Estimation of Defects Based On Defect Decay Model ED3M

Abstract

An accurate prediction of the number of defects in a software product during system testing contributes not only to the management of the system testing process but also to the estimation of the product’s required maintenance. Here, a new approach, called Estimation of Defects based on Defect Decay Model (ED3M), is presented that computes an estimate of the total number of defects in an ongoing testing process. ED3M is based on estimation theory. Unlike many existing approaches, the technique presented here does not depend on historical data from previous projects or on any assumptions about the requirements and/or the testers’ productivity. It is a completely automated approach that relies only on the data collected during an ongoing testing process. This is a key advantage of the ED3M approach, as it makes it widely applicable in different testing environments. The ED3M approach has been evaluated using five data sets from large industrial projects and two data sets from the literature. In addition, a performance analysis has been conducted using simulated data sets to explore its behavior under different models for the input data. The results are very promising; they indicate that the ED3M approach provides accurate estimates, with convergence time as fast as or better than well-known alternative techniques, while using only defect data as input.
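To illustrate the kind of estimation the abstract describes, the sketch below fits a simple exponential defect decay model, D(t) = R0(1 − e^(−λt)), to cumulative defect counts by least squares; the fitted R0 is the estimated total number of defects. This is a simplified stand-in for the ED3M estimator (which is built on estimation theory), not the paper’s actual algorithm; the model form, grid search over λ, and synthetic data are illustrative assumptions, and the example is written in Python for brevity even though the project software is C#.

```python
import math

def estimate_total_defects(cum_defects, lam_grid=None):
    """Fit D(t) = R0 * (1 - exp(-lam * t)) to cumulative defect counts
    (one count per test period, t = 1..n) by least squares.
    Returns (R0_hat, lam_hat); R0_hat is the estimated total defects."""
    n = len(cum_defects)
    if lam_grid is None:
        lam_grid = [i / 1000.0 for i in range(1, 2001)]  # lam in 0.001 .. 2.0
    best = None
    for lam in lam_grid:
        # For a fixed lam the model is linear in R0, so R0 has a closed form.
        g = [1.0 - math.exp(-lam * t) for t in range(1, n + 1)]
        r0 = sum(d * x for d, x in zip(cum_defects, g)) / sum(x * x for x in g)
        sse = sum((d - r0 * x) ** 2 for d, x in zip(cum_defects, g))
        if best is None or sse < best[0]:
            best = (sse, r0, lam)
    return best[1], best[2]

# Synthetic, noise-free data with true R0 = 500 and lam = 0.15,
# observed over 20 test periods.
true_r0, true_lam = 500.0, 0.15
data = [true_r0 * (1 - math.exp(-true_lam * t)) for t in range(1, 21)]
r0_hat, lam_hat = estimate_total_defects(data)
```

On this idealized data the fit recovers the true total; on real, noisy defect data the estimate stabilizes only as testing progresses, which is exactly the convergence behavior the paper analyzes.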

HARDWARE SPECIFICATION:
  •   Processor        :  Pentium III
  •   Speed            :  1.1 GHz
  •   RAM              :  512 MB
  •   Hard Disk        :  40 GB
  •   General          :  Keyboard, Monitor, Mouse

SOFTWARE SPECIFICATION:
  •   Operating System :  Windows XP
  •   Software         :  VS .NET 2005, C#
  •   Back End         :  SQL Server
Existing System:

Several researchers have investigated the behavior of defect density based on module size. One group of researchers has found that larger modules have lower defect density. Two of the reasons offered for this finding are the smaller number of links between modules and the greater care taken in developing larger modules. A second group has suggested that there is an optimal module size for which the defect density is minimal; their results show that defect density exhibits a U-shaped behavior against module size. Still others have reported that smaller modules enjoy lower defect density, exploiting the famous divide-and-conquer rule. Another line of studies has been based on the use of design metrics to predict fault-prone modules. Briand et al. have studied the degree of accuracy of capture-recapture models, originally proposed by biologists, in predicting the number of remaining defects during inspection using actual inspection data. They have also studied the impact of the number of inspectors and the total number of defects on the accuracy of estimators based on the relevant capture-recapture models. Ostrand et al. and Bell et al. have developed a model to predict which files will contain the most faults in the next release, based on the structure of each file as well as the fault and modification history from the previous release.
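The capture-recapture idea studied by Briand et al. can be illustrated with the simplest such estimator, the two-inspector Lincoln-Petersen model: if two inspectors independently find n1 and n2 defects, of which m are found by both, the total defect count is estimated as n1·n2/m. The sketch below (in Python, with made-up inspection numbers) shows the computation:

```python
def lincoln_petersen(n1, n2, m):
    """Two-inspector capture-recapture estimate of the total defect count.
    n1, n2: defects found by inspectors A and B; m: defects found by both."""
    if m == 0:
        raise ValueError("no overlap between inspectors: estimator undefined")
    return n1 * n2 / m

# Inspector A found 30 defects, inspector B found 24, with 18 in common.
total = lincoln_petersen(30, 24, 18)  # -> 40.0
unique_found = 30 + 24 - 18           # distinct defects actually found
remaining = total - unique_found      # estimated defects not yet found -> 4.0
```

The intuition is that a large overlap between inspectors suggests most defects have already been found, while a small overlap suggests many remain; Briand et al.’s work examines how accurate such estimators are on real inspection data.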

Proposed System:

Many researchers have addressed this important problem with varying end goals and have proposed estimation techniques to compute the total number of defects. A group of researchers focuses on finding error-prone modules based on the size of the module. Briand et al. predict the number of remaining defects during inspection using actual inspection data, whereas Ostrand et al. predict which files will contain the most faults in the next release. Zhang and Mockus use data collected from previous projects to estimate the number of defects in a new project. However, these data sets are not always available or, even if they are, may lead to inaccurate estimates. For example, Zhang and Mockus use a naïve method based only on the size of the product to select similar projects, while ignoring many other critical factors such as project type, complexity, etc. Another alternative that appears to produce very accurate estimates is based on the use of Bayesian Belief Networks (BBNs). However, these techniques require the use of additional information, such as expert knowledge and empirical data, that is not necessarily collected by most software development companies. Software reliability growth models (SRGMs) are also used to estimate the total number of defects in order to measure software reliability. Although they can indicate the status of the testing process, some have slow convergence, while others have limited application because they may require more input data or initial values selected by experts. In contrast, the ED3M approach proposed here relies only on the defect data collected during the ongoing testing process, requiring neither historical data nor expert input.
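The convergence concern raised for SRGMs can be made concrete: an estimator is commonly said to have converged once its successive estimates stay within some tolerance (for example, 10 percent) of the final value, and a slower estimator reaches that point later in testing. Below is a minimal sketch of such a convergence check (in Python; the estimate sequence is fabricated for illustration):

```python
def convergence_point(estimates, final_value, tol=0.10):
    """Return the 0-based index of the first estimate after which every
    subsequent estimate stays within tol (e.g. 10 percent) of final_value,
    or None if the sequence never settles inside that band."""
    for i in range(len(estimates)):
        if all(abs(e - final_value) <= tol * final_value
               for e in estimates[i:]):
            return i
    return None

# A fabricated sequence of total-defect estimates over successive
# test periods, settling toward a final value of 500.
ests = [900, 700, 560, 520, 505, 498, 501, 500]
cp = convergence_point(ests, 500, tol=0.10)  # -> 3
```

Comparing this index across estimators (and across tolerances such as 10, 20, and 30 percent) is one simple way to quantify which technique converges faster on the same defect data.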
