Software Reliability, Modeling and Uncertainty – An Independent Study

Software Reliability, Reliability Models & Uncertainty Principles

SUBMITTED BY: Bhavya Jain [09ITMG1020CSE]
DEPARTMENT OF CSE & IT
ITM UNIVERSITY, GURGAON, FORMERLY KNOWN AS INSTITUTE FOR TECHNOLOGY & MANAGEMENT (AUTONOMOUS UNDER MDU, ROHTAK)

CERTIFICATE

This is to certify that the work presented in this report, entitled “Software Reliability, Reliability Models & Uncertainty Principles”, submitted by Bhavya Jain (09ITMG1020CSE) to the Department of CSE & IT, ITM College (Autonomous), Gurgaon, is an authentic record of the student’s own work carried out under the guidance and supervision of Mr. Bablu Pandey, Assistant Professor, Department of CSE & IT. The work embodied in this report is original and was conducted at ITM University, Gurgaon, under my supervision.

Ms.

PRABHA SHARMA (Head of Department), CSE & IT          Mr. Bablu Pandey (Mentor)

ACKNOWLEDGEMENT

It is a distinct pleasure to express my deep sense of gratitude to my learned mentor, Mr. Bablu Pandey, Assistant Professor, Department of CSE & IT, ITM College (Autonomous), Gurgaon, for his invaluable guidance, encouragement and patient review. Without his help and guidance, this study would have been nearly impossible. I am grateful to him for introducing such an interesting topic to me for my work.

He has been very helpful and cooperative as a mentor. I express my immense pleasure and thanks to all the teachers and staff of the Department of Computer Science and Information Technology Engineering for their cooperation and support. Last but not least, I thank all others, especially my classmates and my family members, who in one way or another helped me in the successful completion of this work.

Bhavya Jain (09ITMG1020CSE)

INTRODUCTION

Software Reliability is the probability of failure-free software operation for a specified period of time in a specified environment. Software Reliability is also an important factor affecting system reliability. It differs from hardware reliability in that it reflects design perfection rather than manufacturing perfection. The high complexity of software is the major contributing factor to Software Reliability problems. Software Reliability is not a function of time, although researchers have developed models relating the two.
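As a minimal numeric illustration of the definition above, a sketch assuming a constant failure rate (the simple exponential model, which the report has not formally introduced at this point):

```python
import math

def reliability(failure_rate, t):
    """Probability of failure-free operation for a mission of length t,
    assuming a constant failure rate (exponential model)."""
    return math.exp(-failure_rate * t)

# A component that fails on average once per 1000 hours,
# operated for a 100-hour mission:
print(round(reliability(1 / 1000, 100), 4))  # 0.9048
```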

Software Reliability modeling techniques are maturing, but before using them we must carefully select the model that best suits our case. Measurement in software is still in its infancy: no good quantitative methods have been developed to represent Software Reliability without excessive limitations. Various approaches can be used to improve the reliability of software; however, it is hard to balance development time and budget against software reliability. In software reliability modeling, the parameters of the model are typically estimated from the test data of the corresponding component.

However, the widely used point estimators are subject to random variations in the data, resulting in uncertainties in the estimated parameters. Ignoring this parameter uncertainty can grossly underestimate the uncertainty in the total system reliability. This report attempts to study and quantify the uncertainties in the software reliability modeling of a single component with correlated parameters and in a large system with numerous components.

TABLE OF CONTENTS

INTRODUCTION
1. Software Reliability
   1.1 The bathtub curve for Software Reliability
   1.2 Need for Software Reliability
2. Software Reliability Models
   2.1 Classification of Software Reliability Models
   2.2 Methodologies
   2.3 General Monte Carlo Simulation Method
3. Uncertainty in S/W reliability
   3.1 Bayesian Analysis for Probability Distributions
   3.2 Maximum Entropy Principle (MEP)
   3.3 Extract Data From MEP
   3.4 Monte Carlo Approach for System Uncertainty
4. Case Study: Markov Model
CONCLUSION
BIBLIOGRAPHY

1. Software Reliability

According to ANSI, Software Reliability is defined as: the probability of failure-free software operation for a specified period of time in a specified environment. Although Software Reliability is defined as a probabilistic function and comes with the notion of time, we must note that, unlike traditional Hardware Reliability, Software Reliability is not a direct function of time. Electronic and mechanical parts may become “old” and wear out with time and usage, but software will not rust or wear out during its life cycle. Software will not change over time unless intentionally changed or upgraded.

Software Reliability is an important attribute of software quality, together with functionality, usability, performance, serviceability, capability, installability, maintainability, and documentation. Software Reliability is hard to achieve because the complexity of software tends to be high. While any system with a high degree of complexity, including software, will have difficulty reaching a certain level of reliability, system developers tend to push complexity into the software layer, because of the rapid growth of system size and the ease of doing so by upgrading the software.

For example, large next-generation aircraft will have over one million source lines of software on-board; next-generation air traffic control systems will contain between one and two million lines; the upcoming International Space Station will have over two million lines on-board and over ten million lines of ground support software; several major life-critical defense systems will have over five million source lines of software. [Rook90] While the complexity of software is inversely related to software reliability, it is directly related to other important factors in software quality, especially functionality and capability.

Emphasizing these features will tend to add more complexity to software.

1.1 The bathtub curve for Software Reliability

Over time, hardware exhibits the failure characteristics shown in Figure 1, known as the bathtub curve. Periods A, B and C stand for the burn-in phase, useful-life phase and end-of-life phase. A detailed discussion of the curve can be found in the topic Traditional Reliability.

Figure 1.

Bathtub curve for hardware reliability

Software reliability, however, does not exhibit the same characteristics as hardware. A possible curve is shown in Figure 2 if we project software reliability onto the same axes. There are two major differences between the hardware and software curves. One difference is that, in the last phase, software does not have an increasing failure rate as hardware does. In this phase, software is approaching obsolescence and there is no motivation for any upgrades or changes to the software, so the failure rate will not change.

The second difference is that, in the useful-life phase, software will experience a drastic increase in failure rate each time an upgrade is made. The failure rate then levels off gradually, partly because of the defects found and fixed after the upgrades.

Figure 2. Revised bathtub curve for software reliability

The upgrades in Figure 2 imply feature upgrades, not reliability upgrades. For feature upgrades, the complexity of software is likely to increase, since the functionality of the software is enhanced.

Even bug fixes may be a cause of more software failures, if a fix introduces other defects into the software. For reliability upgrades, it is possible to incur a drop in the software failure rate, if the goal of the upgrade is to enhance software reliability, for instance through a redesign or reimplementation of some modules using better engineering approaches, such as the clean-room method.

1.2 Need for Software Reliability

1.2.1 To determine whether the software can be released.

1.2.2 To know what resources are required to bring the software to the required reliability.
1.2.3 To prioritise testing/inspection of modules having the highest estimated fault content.
1.2.4 To develop fault-avoidance techniques.
1.2.5 To minimize the number of faults.
1.2.6 To prevent insertion of specific types of faults.

2. Software Reliability Models

A proliferation of software reliability models has emerged as people try to understand the characteristics of how and why software fails, and try to quantify software reliability.

Over 200 models have been developed since the early 1970s, but how to quantify software reliability still remains largely unsolved. Despite the many existing models and the many more emerging, none can capture a satisfying amount of the complexity of software; constraints and assumptions have to be made in the quantification process. Therefore, there is no single model that can be used in all situations. No model is complete or even representative. One model may work well for a certain set of software, but may be completely off track for other kinds of problems.

Most software reliability models contain the following parts: assumptions, factors, and a mathematical function that relates reliability to the factors. The mathematical function is usually higher-order exponential or logarithmic. Software modeling techniques can be divided into two subcategories: prediction modeling and estimation modeling. Both kinds of modeling techniques are based on observing and accumulating failure data and analysing it with statistical inference. The major differences between the two kinds of models are shown in Table 1.

ISSUE                          | PREDICTION MODELS                      | ESTIMATION MODELS
DATA REFERENCE                 | Uses historical data                   | Uses data from the current software development effort
WHEN USED IN DEVELOPMENT CYCLE | Usually made prior to development or test phases; can be used as early as the concept phase | Usually made later in the life cycle (after some data have been collected); not typically used in concept or development phases
TIME FRAME                     | Predicts reliability at some future time | Estimates reliability at either the present or some future time

Table 1. Differences between software reliability prediction models and software reliability estimation models

2.1 Classification of Software Reliability Models

One of the early reliability models, which was based on hardware reliability concepts, was developed by Duane (1964). In the seventies, many software reliability models were proposed, developed and widely used. Since then, many different software reliability models have been developed, and numerous researchers in software reliability engineering have attempted to categorize and classify them.

Musa and Okumoto (1984) classify the reliability models in terms of five attributes:
(1) Time domain: either calendar or execution time.
(2) Category: either a finite or infinite number of failures. For finite-failure category models, there are a number of classes depending on the functional form of the failure intensity in terms of time. For infinite-failure category models, there are a number of families depending on the functional form of the failure intensity in terms of the expected number of failures experienced.
(3) Type: the distribution of the number of failures experienced as a function of time t.
(4) Class: the functional form of the failure intensity expressed in terms of time (for the finite-failure category only).
(5) Family: the functional form of the failure intensity expressed in terms of the expected number of failures experienced (for the infinite-failure category only).
For the sake of simplicity, Musa and Okumoto first separate the finite from the infinite models. They then use the five attributes as a guide to finding the relationships between the models, thus clarifying comparisons between them.

The simplicity of this classification explains its popularity. Goel and Bastani (1985) define two main categories of software reliability models: (1) software reliability growth models, which estimate reliability using the error history, and (2) statistical models, which estimate reliability using the results (success or failure) of executing test cases. The software reliability growth models are classified based on the nature of the failures (Goel 1985): times-between-failures models, failure-count models, and fault-seeding and input-domain-based models.

Singpurwalla and Wilson (1999) categorize models into two major types: (1) Type I: time between successive failures models, which break down into failure-rate models (Type I-1) and random-function models (Type I-2); and (2) Type II: models of the number of failures up to a given time. Gokhale, Marinos and Trivedi (1996) classify reliability models as follows: (1) Data-domain models: a better reliability estimate can be achieved if all of the combinations of the inputs are identified and the outcomes are well observed.

To implement this idea, this model category is decomposed into fault-seeding models and input-domain models. (2) Time-domain models: these model the failure process using the failure history to estimate the number of faults and the test time required to uncover these faults. Homogeneous Markov, non-homogeneous Markov and semi-Markov models belong to the time-domain category. Asad, Uttah and Rehman (2004) classify software reliability models according to software development life cycle phases. Their classification is well defined and comprehensive.

2.2 Methodologies

2.2.1 Markov Theory

Markov modeling is a widely used technique in reliability analysis; it is flexible and effective for reliability analysis of various computing systems. Xie et al. (2004) classify Markov models into two major types: standard Markov models and non-standard Markov models, in which the Markov property is not valid at all times. According to their time space and state space, Markov models are classified into four categories: discrete-time Markov chains, continuous-time Markov chains, discrete-time continuous-state Markov models, and continuous-time continuous-state Markov models.

For the first type, the discrete-time Markov chain, the mathematical definition is

Pr{X_{n+1} = j | X_n = i, X_{n-1} = i_{n-1}, ..., X_0 = i_0} = Pr{X_{n+1} = j | X_n = i} = P_ij    (1.1)

where X_n = i denotes the process being in state i at time n, and P_ij is the one-step transition probability from state i to state j. The discrete-time Markov chain is a widely used technique in system reliability analysis. Wang (2002) uses Markov chains to calculate the reliability of distributed computing systems by introducing two reliability measures: Markov chain distributed program reliability (MDPR) and Markov chain distributed system reliability (MDSR).
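A small sketch of definition (1.1) in code, using a hypothetical two-state chain (state 0 = “up”, state 1 = “down”); repeatedly applying the one-step matrix gives the n-step transition probabilities:

```python
# One-step transition probabilities P[i][j] = Pr{X_{n+1} = j | X_n = i}
# for a hypothetical 2-state chain: state 0 = "up", state 1 = "down".
P = [[0.9, 0.1],
     [0.6, 0.4]]

def n_step(P, n):
    """n-step transition matrix, obtained by repeatedly applying the
    one-step matrix (the discrete-time Chapman-Kolmogorov relation)."""
    size = len(P)
    result = [[1.0 if i == j else 0.0 for j in range(size)]
              for i in range(size)]
    for _ in range(n):
        result = [[sum(result[i][k] * P[k][j] for k in range(size))
                   for j in range(size)]
                  for i in range(size)]
    return result

# Pr{system up after 2 steps | started up} = 0.9*0.9 + 0.1*0.6
print(round(n_step(P, 2)[0][0], 4))  # 0.87
```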

A continuous-time Markov chain (CTMC) {X(t)}, taking values in the discrete state space Ω, is defined as a stochastic process satisfying the property

Pr{X(t+s) = j | X(s) = i, X(u), 0 ≤ u ≤ s} = Pr{X(t+s) = j | X(s) = i}    (1.2)

where s ≥ 0, t > 0 and i, j ∈ Ω. A CTMC's future state depends only on the present state and is independent of the past, given the present. For CTMC models, we have the Chapman-Kolmogorov equation (Ross, 2000):

P_ij(t+s) = Σ_k P_ik(t) P_kj(s)

2.2.2 Bayesian Approach

The Bayesian approach combines prior knowledge/information about the unknown parameters with current data/observations to deduce the posterior probability distribution of the parameters. Moreover, this approach can also handle the correlation among those parameters by using joint distributions. To estimate the parameters a = {a1, a2, a3, ..., an}, observation data s = {s1, s2, s3, ..., sn} are collected by repeated experiments. Then, given the prior distribution p(a) and observations s, the posterior distribution follows from Bayes' theorem:

p(a | s) = p(s | a) p(a) / ∫ p(s | a) p(a) da

The above standard Bayesian approach is well known and straightforward.
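A minimal sketch of such an update, using an illustrative conjugate choice (a Gamma prior on the failure rate of exponentially distributed inter-failure times) rather than the report's specific model:

```python
# Conjugate Bayesian update: exponential inter-failure times with a
# Gamma(alpha, beta) prior on the failure rate. (An illustrative
# conjugate choice; the report's models are more general.)
def posterior(alpha, beta, times):
    """Gamma prior + exponential likelihood -> Gamma posterior:
    alpha' = alpha + n, beta' = beta + sum of observed times."""
    return alpha + len(times), beta + sum(times)

alpha0, beta0 = 2.0, 100.0          # prior belief: mean rate 0.02 per hour
times = [40.0, 55.0, 30.0, 75.0]    # observed inter-failure times (hours)
a_post, b_post = posterior(alpha0, beta0, times)
print(a_post, b_post)               # 6.0 300.0
print(round(a_post / b_post, 3))    # posterior mean rate: 0.02
```

Scarce data leaves the posterior close to the prior; more observations let the data dominate, which is exactly the behaviour the next paragraph's discussion of limited failure data concerns.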

However, applying it to software reliability modeling poses several challenges specific to software testing and reliability. An important characteristic is that failure data are usually scarce in a single test. The lack of failure data in a project has challenged the modeling of software reliability and makes estimating proper posterior distributions more difficult.

2.3 General Monte Carlo Simulation Method

For the special normal case, the probability function of the output of the entire system can be obtained analytically.

However, for general distributions and more complex voting systems, closed analytical forms may not be obtainable. In these cases, the Monte Carlo simulation method is an efficient and effective alternative for evaluating the reliability of a complex system. To evaluate the effectiveness and accuracy of this method, we compare it with the analytical method proposed in the previous section. Considering the system in Figure 3.1, all the parameters are kept unchanged, and if the output of the entire system is between (x-a, x+a) (a = 0.02), the output is considered to be correct. As we know, the accuracy of the Monte Carlo simulation method is greatly influenced by the sizes of the samples used in the simulation. To compare the accuracy of the Monte Carlo simulation method with the analytical method, simulations based on five samples of different sizes are run to obtain the reliability of the WVS presented in Figure 3.1. Reliability here is calculated as the proportion of correct outputs out of the total number of outputs.
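The effect of sample size can be sketched as follows; the normal output distribution and parameter values here are stand-ins for illustration, not the WVS of Figure 3.1:

```python
import random

def mc_reliability(n_samples, mu=0.0, sigma=0.01, a=0.02, seed=1):
    """Estimate reliability as the proportion of simulated outputs that
    fall within (mu - a, mu + a). A plain normal output distribution is
    used as a stand-in for the system's real output."""
    rng = random.Random(seed)
    correct = sum(1 for _ in range(n_samples)
                  if abs(rng.gauss(mu, sigma) - mu) < a)
    return correct / n_samples

# Larger samples give estimates closer to the true value,
# here Pr{|N(0, 0.01)| < 0.02} ~= 0.9545.
for n in (100, 1000, 10000):
    print(n, mc_reliability(n))
```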

The program runtime (in seconds) is recorded and the error of the Monte Carlo simulation method is compared with the result from the analytical method in the following table. The error is defined as:

3. Uncertainty in S/W reliability

Reliability modeling has gained considerable interest and acceptance by applying probabilistic methods to real-world situations. A software system usually contains one or more basic modules or components that function together to achieve some task.

In order to apply the models to predict the reliability of a component, the parameters of the models need to be known or estimated. Parameter uncertainty arises when the input parameters are unknown. Moreover, the reliability computed from models that are functions of these parameters is not sufficiently precise when the parameters are uncertain. Hence, it is necessary to determine the uncertainty in the parameters for the modeling work. However, one special characteristic of software reliability modeling and testing is insufficient failure data.

Failure data are usually scarce and limited to a single test. Insufficient failure data makes software reliability modeling difficult, and makes its uncertainty analysis much more challenging.

3.1 Bayesian Analysis for Probability Distributions

We apply the Bayesian approach here to quantify the uncertainty in the component parameters. This approach combines prior knowledge/information about the unknown parameters with current data/observations to deduce the posterior probability distribution of the parameters.

Moreover, this approach can also handle the correlation among those parameters by using joint distributions.

3.1.1 Assumptions

1) The mean value function of the model is denoted by m(t | a) and the failure intensity function by λ(t | a).
2) The prior joint distribution of the parameters is denoted by p(a), which is unknown.
3) The component is tested and a total of n failures have been observed. Let s_k denote the time to the k-th failure (k = 1, 2, ..., n); the failure times are conditionally independent.
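Under these assumptions, the NHPP log-likelihood of failure times observed up to time T is Σ_k ln λ(s_k | a) − m(T | a). A sketch using the Goel-Okumoto form m(t) = a(1 − e^(−bt)) as an illustrative choice of mean value function (the failure times and parameter values are hypothetical):

```python
import math

def go_mean(t, a, b):
    """Goel-Okumoto mean value function m(t | a, b) = a(1 - e^(-bt))."""
    return a * (1.0 - math.exp(-b * t))

def go_intensity(t, a, b):
    """Failure intensity lambda(t | a, b) = a * b * e^(-bt)."""
    return a * b * math.exp(-b * t)

def log_likelihood(times, T, a, b):
    """NHPP log-likelihood of failure times observed up to time T:
    sum_k log lambda(s_k) - m(T)."""
    return sum(math.log(go_intensity(s, a, b)) for s in times) - go_mean(T, a, b)

failures = [10.0, 25.0, 60.0, 120.0]   # hypothetical failure times (hours)
print(round(log_likelihood(failures, 200.0, a=6.0, b=0.01), 3))  # -18.592
```

Combined with a prior p(a, b), this likelihood yields the posterior up to a normalizing constant, which is the quantity the Bayesian analysis works with.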

As noted earlier, failure data from a single test are usually scarce, which makes estimating proper posterior distributions difficult. Fortunately, prior information such as expert knowledge and historical data from similar experiments is typically available.

Therefore, we propose to incorporate experts' suggestions and historical data from previous projects into the prior distribution p(a). The following shows how to transform expert knowledge and historical data by integrating the Maximum-Entropy Principle (Kapur, 1989) into the Bayesian approach.

3.2 Maximum Entropy Principle (MEP)

Though a single test in the current project lacks sufficient failure data for modeling, historical data, previous experience, expert suggestions and other environmental information are useful.

For example, a development team should have knowledge of the development process, debugging methods, test procedures and so on. This related information can be transformed into a prior distribution through the Maximum-Entropy Principle (MEP). MEP (Kapur, 1989) is a technique that applies the physical principle of entropy, which states that without external interference, entropy, a measure of disorder, always tends to a maximum. Entropy has a direct relationship to information theory, and in a sense measures the amount of uncertainty in a probability distribution.

This measure provides a probability distribution that is consistent with known constraints expressed in terms of one or more quantities. Let Y be a random variable with pdf f, defined on D_y ⊆ R (the real numbers). The uncertainty concerning Y, measured by the entropy function, is given as

H(f) = - ∫ f(y) ln f(y) dy

For example, suppose the prior mean is specified; among prior distributions with this mean, the MEP seeks the distribution that maximizes H(f).
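A small discrete illustration of this (the classic “loaded die” example, not taken from the report): given only a mean constraint, the maximum-entropy distribution on {1, ..., 6} has the exponential-family form p_i ∝ exp(−β i), with the multiplier β chosen to match the mean:

```python
import math

def maxent_die(mean, iters=200):
    """Maximum-entropy distribution on faces 1..6 subject to a mean
    constraint: p_i proportional to exp(-beta * i), with the Lagrange
    multiplier beta found by bisection."""
    faces = list(range(1, 7))

    def mean_for(beta):
        w = [math.exp(-beta * i) for i in faces]
        z = sum(w)
        return sum(i * wi for i, wi in zip(faces, w)) / z

    lo, hi = -50.0, 50.0          # mean_for is decreasing in beta
    for _ in range(iters):
        mid = (lo + hi) / 2.0
        if mean_for(mid) > mean:
            lo = mid
        else:
            hi = mid
    beta = (lo + hi) / 2.0
    w = [math.exp(-beta * i) for i in faces]
    z = sum(w)
    return [wi / z for wi in w]

# With mean 3.5 and no other information, MEP recovers the uniform die:
print([round(x, 3) for x in maxent_die(3.5)])  # six values of 0.167
```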

The MEP in the case where there is no other partial information leads to the distribution of “most uncertainty”, which for certain discrete cases results in the non-informative prior. With partial prior information available, we then consider this information in the form of restrictions on the prior, hence helping us shape the prior. This partial information can be both subjective and objective, e.g., subjective information (such as an expert's prior opinion that the lifetime is exponentially distributed) and objective information (such as historical data enabling calculation of some moments).

3.3 Extract Data From MEP

To combine the MEP with the Bayesian approach (BA) for the uncertainty analysis in software reliability, it is important to extract data from the experts and history using the MEP and then input them into the prior distributions of the BA. The goal of MEP is to incorporate all available information, outside of which it is desired to assume nothing about what is unknown (Berger et al., 1996). In MEP, the probability distribution represents information, not just frequencies. Below we describe several ways to extract data for the MEP for both discrete and continuous distributions.

3.3.1 Discrete distribution

3.3.2 Continuous distribution

3.4 Monte Carlo Approach for System Uncertainty

Monte Carlo simulation is a practical way to make the uncertainty analysis of a complicated system tractable.

Algorithm 1 provides a general Monte Carlo approach for uncertainty analysis in complicated systems.

Algorithm 1: Monte Carlo approach

Using the above Monte Carlo (MC) simulation algorithm, the uncertainty of the system reliability can be analyzed; e.g. the mean and confidence intervals can be approximated by the sample average and percentiles, respectively. The above approach is widely applicable.
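A sketch of such a Monte Carlo loop, with hypothetical Gamma posteriors and a simple series system standing in for the real model:

```python
import math
import random

def system_reliability(rates, t):
    """Series system of independent exponential components:
    R(t) = exp(-(r1 + ... + rN) * t)."""
    return math.exp(-sum(rates) * t)

def mc_uncertainty(n_samples, t=10.0, seed=7):
    """Monte Carlo uncertainty analysis: draw each component's failure
    rate from its (here, hypothetical Gamma) posterior, evaluate the
    system reliability, and summarize mean and 5%/95% percentiles."""
    rng = random.Random(seed)
    # hypothetical Gamma(shape, scale) posteriors for three components
    posteriors = [(4.0, 0.005), (3.0, 0.004), (5.0, 0.002)]
    samples = sorted(
        system_reliability([rng.gammavariate(k, th) for k, th in posteriors], t)
        for _ in range(n_samples))
    mean = sum(samples) / n_samples
    lo, hi = samples[int(0.05 * n_samples)], samples[int(0.95 * n_samples)]
    return mean, lo, hi

mean, lo, hi = mc_uncertainty(2000)
print(round(mean, 3), round(lo, 3), round(hi, 3))
```

Repeating the evaluation over a grid of time points yields the mean reliability curve and its confidence band, as in the case study that follows.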

4. Case Study: Markov Model

An example of modular software is presented here to show Monte Carlo simulation for uncertainty analysis of the reliability of software with multiple modules. This example is based on a simple Markov model. Suppose the software contains three modules: two parallel modules fulfilling the same function, and another module handling the switch between the two parallel modules. The two parallel modules each run the same task with failure rate λ. If a module fails, the switching module transfers the workload of the failed module to the other module. A coverage factor c denotes the probability that the switching action is successful. If it is not successful, the software fails; this is the imperfect-coverage case. Otherwise, the software keeps running; a failure of the other module will then fail the software, while a restart of the failed module will bring the software back to its original state. Also, suppose the time to restart a module is exponentially distributed with parameter μ. Thus, the parameters for the two modules are the failure rate λ and restart rate μ, and the parameter for the switching module is the switching success probability c. The software reliability can be derived from a Markov model that easily combines all three modules. The CTMC is depicted in the figure, where state 1 is down and the system is up in state 2 (one module works) and state 3 (two modules work). The software initially begins in state 3. If either one of the two modules fails, with rate 2λ, it leaves state 3 for state 1 with probability (1-c) due to switching failure, and for state 2 with probability c for successful switching. In state 2, it can enter state 1 if the remaining module fails, with failure rate λ, or it can return to state 3 with repair rate μ to recover the failed module.
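A sketch of this three-state CTMC, solving the Kolmogorov forward equations numerically with Euler steps; the model follows the state description above, but the parameter values are illustrative, not the report's:

```python
def case_study_reliability(t, lam=0.01, mu=0.5, c=0.9, steps=20000):
    """Reliability R(t) = Pr{software not failed by time t} for the
    three-module case study, by Euler integration of the Kolmogorov
    forward equations dp/dt = p*Q. States: 0 = both modules up,
    1 = one module up, 2 = failed (absorbing)."""
    Q = [[-2.0 * lam, 2.0 * lam * c, 2.0 * lam * (1.0 - c)],
         [mu, -(lam + mu), lam],
         [0.0, 0.0, 0.0]]
    p = [1.0, 0.0, 0.0]                    # start with both modules working
    dt = t / steps
    for _ in range(steps):
        p = [pj + dt * sum(p[i] * Q[i][j] for i in range(3))
             for j, pj in enumerate(p)]
    return p[0] + p[1]                     # up in state 0 or state 1

for t in (0.0, 50.0, 200.0):
    print(t, round(case_study_reliability(t), 4))
```

Feeding sampled (λ, μ, c) triples through this function is exactly the per-sample evaluation step of the Monte Carlo uncertainty analysis.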
In this example, we assume these parameters (λ, μ and c) are independent. Suppose that the distributions of the three parameters have been derived from the BA plus MEP. We apply the Monte Carlo approach to simulate 1000 sample points of system reliability at each of 5000 time points, and the final results of the software reliability and uncertainty analysis (including sample average, 5% and 95% quantiles) are shown in Figure 6.8. Then, the analytic method is implemented for this problem and the uncertainty analysis result with the mean value is also plotted in Figure 6.8, where ‘MC’ denotes ‘Monte Carlo’ and ‘AM’ means ‘Analytic Method’.

In Figure 6.8, we find that during the initial period of the reliability prediction, the confidence interval is small, indicating that the uncertainty of the software reliability is low. The confidence interval then increases, reaching its maximum around the middle part. In the latter part, the confidence interval becomes small again, but the mean value of the system reliability is also small, so the relative uncertainty is still large.

We also observe from those curves that the sample average from the Monte Carlo simulation is very close to the mean calculated by the analytic method.

CONCLUSION

We studied the uncertainty problems in reliability modeling at both the component level and the system level. This report not only addressed the uncertainty problem using the Bayesian Approach (BA), but, more importantly, addressed the challenge of the dearth of data by embedding the Maximum-Entropy Principle (MEP) into the BA.

By using MEP with BA, expert knowledge, historical data from similar experiments and development environments can be incorporated into the uncertainty analysis and used to compensate for insufficient failure data. After exploring the uncertainty for a software component, this report further extended the uncertainty analysis to more complicated systems that contain numerous components, each with its own distributions and uncertain parameters. A Monte Carlo approach was proposed to solve this.

This method is broadly applicable to many systems that can be modeled with different modeling tools. The approach was then illustrated with a case study of three-module software on a Markov model. These examples, with their distinct characteristics, exhibit the generality and effectiveness of the MC approach in analyzing not only simple module-based systems but also complicated systems with numerous uncertain parameters.

BIBLIOGRAPHY

- Mr. Bablu Pandey, Assistant Professor, CSE & IT Dept., ITM University
- Yuan-Shun Dai (Member, IEEE) et al., "Uncertainty Analysis in Software Reliability Model by Bayesian Approach and Maximum Entropy Principle"
- Long Quan, B.Eng., USTC, "Computer System Reliability Modeling, Analysis and Optimization"
- http://www.cs.fit.edu/~vramamoo/publications/abosaq_mmr2007.pdf
- http://www.cse.cuhk.edu.hk/~lyu/book/reliability/pdf/Chap_3.pdf
- http://www.csee.wvu.edu/~katerina/Papers/ISSRE-2003.pdf