Reliability Growth Models


Reliability Growth Modeling

A reliability growth model is a representation of how system reliability evolves over time throughout the testing process. When system failures are identified, the underlying faults that cause them are corrected, so the system's reliability should improve through system testing and debugging. To forecast reliability, the conceptual reliability growth model must then be converted into a mathematical model.

Reliability growth modeling entails comparing observed reliability at various points in time with known functions that describe possible changes in reliability. An equal-step function, for example, assumes that the system's reliability increases by a constant amount with each release. By matching the observed reliability growth with one of these functions, it is feasible to forecast the system's reliability at some future point in time. As a result, reliability growth models may be used to help in project planning, as the sketch below illustrates.
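The following sketch, in Python with invented reliability figures, illustrates the equal-step idea: the average per-release improvement observed so far is extrapolated to a future release. The data and release numbers are purely hypothetical.

```python
# Hypothetical data: reliability (probability of failure-free operation)
# measured after each of the first four releases.
observed = [0.75, 0.80, 0.84, 0.87]

# Fit the "equal step": the average improvement per release seen so far.
steps = [b - a for a, b in zip(observed, observed[1:])]
step = sum(steps) / len(steps)

# Extrapolate the step function to a future release.
release_to_forecast = 6
forecast = observed[-1] + step * (release_to_forecast - len(observed))
print(f"Forecast reliability at release {release_to_forecast}: {forecast:.2f}")
```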

This group of models measures and forecasts the improvement of reliability through testing. A growth model depicts a system's reliability or failure rate as a function of time or of the number of test cases. The models in this category are described below.

Coutinho Model

On log-log paper, Coutinho plotted the cumulative number of defects identified and the number of corrective actions taken against the cumulative testing weeks. Let N(t) represent the total number of failures and t the total testing time. The model's failure rate, λ(t), may be expressed as

$$\lambda(t) = \frac{N(t)}{t} = \beta_0 t^{-\beta_1}$$

where $\beta_0$ and $\beta_1$ are model parameters. The parameters may be estimated using the least-squares approach.
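As a minimal sketch of the least-squares fit, taking logarithms gives ln λ(t) = ln β₀ − β₁ ln t, which is linear in ln t; the failure counts below are invented for illustration.

```python
import numpy as np

# Invented data: cumulative failures N(t) observed after t weeks of testing.
weeks = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=float)
failures = np.array([12, 20, 26, 30, 33, 36, 38, 40], dtype=float)

# Observed failure rate lambda(t) = N(t) / t, fitted on log-log scale:
# ln(lambda) = ln(beta0) - beta1 * ln(t).
lam = failures / weeks
slope, intercept = np.polyfit(np.log(weeks), np.log(lam), 1)
beta1 = -slope
beta0 = np.exp(intercept)

print(f"beta0 = {beta0:.2f}, beta1 = {beta1:.2f}")
print(f"Predicted failure rate at week 10: {beta0 * 10 ** (-beta1):.2f} per week")
```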

Wall and Ferguson Model

The total number of failures at time t, m(t), may be written as,

$$m(t) = a_0 [b(t)]^{\beta}$$

where $a_0$ and $\beta$ are unknown parameters. The function b(t) can be taken as the number of test cases or the total testing time. Similarly, the failure rate function at time t is given by

$$\lambda(t) = m'(t) = a_0 \beta \, b'(t) [b(t)]^{\beta - 1}$$

Wall and Ferguson evaluated their model using a variety of software failure data and discovered that the failure data correlated well with the model.
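The following sketch assumes, as one common choice, that b(t) is simply the cumulative testing time, so that m(t) = a₀t^β and λ(t) = a₀βt^(β−1); the failure data are invented, and the parameters are again estimated by a log-log least-squares fit.

```python
import numpy as np

# Invented data: cumulative failures m(t) after t hours of testing.
t = np.array([10, 20, 40, 80, 160], dtype=float)
m = np.array([15, 24, 37, 60, 95], dtype=float)

# With b(t) = t, log m(t) = log a0 + beta * log t is linear in log t.
beta, log_a0 = np.polyfit(np.log(t), np.log(m), 1)
a0 = np.exp(log_a0)

# Failure rate lambda(t) = m'(t) = a0 * beta * t**(beta - 1).
failure_rate = a0 * beta * t ** (beta - 1)

print(f"a0 = {a0:.2f}, beta = {beta:.2f}")
print("lambda(t) at the sample times:", np.round(failure_rate, 3))
```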

Criticism of Reliability Growth Models

The field of reliability growth modeling appears to face the same fundamental, that is, philosophical, issues as so-called historicism. According to Popper, long-term predictions cannot be applied to the social structures in which we are all entangled; instead, "it is prudent to tackle the most pressing and genuine societal problems one by one, right now."

When I first encountered reliability growth models, I felt obliged to apply Popper's principles of falsifiability, corroboration, and simplicity to them. What follows is an evaluation of reliability growth models in terms of their predictive power and their capacity to enable learning from mistakes, carried out according to Popper's criteria and using statistical analysis. The following questions must be addressed: What is the predictive power of the rather complex reliability growth models? Is the effort worth the outcome, or is it perhaps deceptive? What can reliability growth models teach us? Which approaches, if any, are better suited to the work of the software engineer?

The Benefits of Reliability Growth Models

Classical reliability theory is well established: much time and effort have gone into collecting, categorizing, and analyzing failure rate statistics for hardware components (Shooman, 1990). Under certain conditions, the lifetime (time to the first failure) of hardware components can be modeled as an exponentially distributed random variable. The reliability function R(t) of an item (component or system) is defined as the probability that the time to failure is greater than t. With a constant failure rate λ, this function is given by

$$R(t) = e^{-\lambda t}$$

Let us now consider a program that contains certain faults. Assuming a constant operational profile, a constant failure rate λ can be expected, and the reliability function looks the same as the one shown above for hardware failures.
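A small numerical sketch of this exponential reliability function, with an illustrative (invented) failure rate:

```python
import math

failure_rate = 0.002            # lambda, failures per hour (illustrative value)
mttf = 1.0 / failure_rate       # mean time to failure for the exponential model

# R(t) = exp(-lambda * t): probability of surviving t hours without failure.
for t in (100, 500, 1000):
    print(f"R({t} h) = {math.exp(-failure_rate * t):.3f}")
print(f"MTTF = {mttf:.0f} h")
```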

There is one significant difference: the failure is now a transient event that does not produce a permanent defect; the underlying fault was present from the start.

We expect some fault-removal work, and as a result a new and unknown failure rate emerges. Estimating failure rates from previous experience therefore appears to be impossible from the outset.

What is there to be done? We can try to pull ourselves out of these binds by our own bootstraps. Every so-called Reliability Growth Model (RGM) is based on certain assumptions about how failure rates vary as a result of fault elimination.

This assumption lies at the heart of the corresponding model. It is intended to convey the RGM's empirical substance.

One common assumption is that of the Jelinski-Moranda model (Lyu, 1996): each fault removal reduces the failure rate by the same constant amount for all faults.
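A minimal sketch of that assumption: if N₀ faults are initially present and each contributes the same amount φ to the failure rate, the rate during the i-th failure interval is λᵢ = φ(N₀ − i + 1), so every repair lowers the rate by the constant φ. The values of N₀ and φ below are illustrative, not estimated from data.

```python
N0 = 25        # assumed initial number of faults (illustrative)
phi = 0.004    # failure-rate contribution of a single fault, per hour (illustrative)

# Failure rate during the i-th interval under the Jelinski-Moranda assumption.
for i in range(1, 6):
    lam_i = phi * (N0 - i + 1)
    print(f"interval {i}: lambda = {lam_i:.3f} per hour")
```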

The main concern is whether such assumptions can be validated.

Trivial Reliability Prediction

The "trivial prediction" is a strategy used in weather forecasting: the weather will be the same tomorrow as it is now. This little prognosis has some predictive power. I was curious whether a simple dependability growth model could compete with more complicated models based on statistical criteria.

The Trivial Reliability Prediction (TRP) model makes no assumption of reliability growth; it applies to systems with a constant failure rate. Under this premise, the mean time between failures, or its reciprocal, the failure rate, can be estimated from the times between past failures: the mean of the most recent - say, five or ten - execution times between successive failures is taken as the mean time between failures, as the sketch below shows.
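A minimal sketch of the TRP under these assumptions; the window size and the inter-failure times are invented for illustration.

```python
def trivial_prediction(interfailure_times, k=5):
    """Mean of the last k times between failures (MTBF) and its reciprocal."""
    window = interfailure_times[-k:]
    mtbf = sum(window) / len(window)
    return mtbf, 1.0 / mtbf

# Invented execution times (hours) between successive failures.
times = [40, 55, 35, 60, 70, 52, 80, 66]
mtbf, rate = trivial_prediction(times, k=5)
print(f"Predicted MTBF = {mtbf:.1f} h, failure rate = {rate:.4f} per hour")
```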

The TRP is a basic model that is often used in hardware reliability. It has all of the components that any well-accepted theory should have −

  • The notion is simple to grasp, and any engineer can see its limitations.

  • The assumptions under which the model applies are well understood.

  • Using the Poisson distribution, the precision of the estimates - the so-called confidence interval - can be easily determined; a sketch of this calculation follows the list.
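As a sketch of the confidence-interval point above: for r failures observed during a total execution time T under a constant failure rate, a standard two-sided interval for the rate uses chi-square quantiles of the Poisson counting process. The values of r, T, and the confidence level are illustrative.

```python
from scipy.stats import chi2

r, T, conf = 8, 458.0, 0.95       # failures, total execution hours, confidence level
alpha = 1.0 - conf

# Classical chi-square bounds for a Poisson-distributed failure count.
lower = chi2.ppf(alpha / 2, 2 * r) / (2 * T)
upper = chi2.ppf(1 - alpha / 2, 2 * r + 2) / (2 * T)
print(f"{conf:.0%} CI for the failure rate: [{lower:.4f}, {upper:.4f}] per hour")
```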

Why do we place the TRP, which is based on the premise of unchanged reliability, in competition with more elaborate models that assume reliability growth? We do not know how the failure rate changes when a fault is removed: the system might change significantly, for better or for worse. Very often, however, it is acceptable to assume no change at all, because removing a single fault usually has only a minimal effect on system reliability.

Popper's Criterion

Reliability growth models are designed to forecast software behavior based on prior experience. In this scenario, previous experience is dependent on historical data; predictions cannot be validated by trials.

Knowledge is synthetic (or empirical) insofar as it involves assumptions that are not valid a priori and must be validated by experience; it is analytic in those parts that rely exclusively on logical reasoning and mathematics. Popper's (1980) criteria for the synthetic or empirical substance of prediction techniques - that is, for their predictive strength - are as follows −

  • Falsifiability − No prediction technique, regardless of its predictive value (or predictive power), can be correct in all cases; there must be a chance of failure. A statement such as "It will rain or not rain here tomorrow" is not considered empirical because it cannot be disputed.

  • Corroboration − A (falsifiable) prediction technique is considered to be corroborated when its predictive value has been shown under a wide range of situations.

  • Objectivity − Predictions and assertions are objective because they can be tested inter-subjectively.

  • Simplicity − A prediction technique should not rely on too many adjustable parameters; otherwise it becomes too easy to evade falsification, and its predictive value is too low.

Models of Reliability Growth in Light of Popper's Criteria

In terms of objectivity, RGMs are not inferior to other prediction approaches. In my opinion, however, RGMs fail to meet the remaining Popperian criteria.

Reliability growth models do not satisfy Popper's falsifiability criterion. This is mostly due to the variety of models and parameters, which makes it virtually impossible not to find a model that fits any given experimental or field data.

As a result, the models also fail to meet the criterion of simplicity. "Simple" does not mean "easily comprehensible" here: a (simple) straight line fitted to a set of points in the plane is more persuasive and has more empirical power than the fact that the same points can be approximated by a (not simple) higher-order curve.

Consequently, these models cannot be corroborated (in the Popperian sense). In all of the model demonstrations I have seen so far, the model is chosen and fitted to the data after the fact. I am not aware of any falsifiable and non-trivial prediction technique for software reliability based on these models.
