Stop Testing Process


This can be a difficult decision. Many modern software applications are so complex, and run in such an interdependent environment, that complete testing can never be done. "When to stop testing" is one of the most difficult questions a test engineer faces. Common factors in deciding when to stop are:

1. Deadlines (release deadlines, testing deadlines, etc.)
2. Test cases completed with a certain percentage passed
3. Test budget depleted
4. Coverage of code / functionality / requirements reaches a specified point
5. The rate at which errors are found falls below a certain level
6. Beta or alpha testing period ends
7. The risk in the project is under the acceptable limit
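As a rough illustration only, the factors above can be combined into a simple go/no-go check. The field names and thresholds below are invented for this sketch, not prescribed by any standard; a real exit decision weighs these factors with judgment.

```python
# A minimal sketch of an exit-criteria check over the factors listed above.
# All field names and threshold values are illustrative assumptions.

def should_stop_testing(s: dict) -> bool:
    """Return True when a hard stop is hit or all quality gates pass."""
    # Hard stops: schedule or budget runs out regardless of quality.
    hard_stop = s["deadline_reached"] or s["budget_exhausted"]

    # Quality gates: the project has reached an acceptably low-risk state.
    quality_gates = (
        s["test_cases_passed_pct"] >= 95.0        # cases completed and passing
        and s["coverage_pct"] >= 90.0             # code/requirements coverage
        and s["new_errors_per_week"] <= 2         # error-discovery rate is small
        and s["open_high_priority_errors"] == 0
        and s["residual_risk"] <= s["acceptable_risk"]
    )
    return hard_stop or quality_gates


status = {
    "deadline_reached": False,
    "budget_exhausted": False,
    "test_cases_passed_pct": 97.5,
    "coverage_pct": 92.0,
    "new_errors_per_week": 1,
    "open_high_priority_errors": 0,
    "residual_risk": 0.1,
    "acceptable_risk": 0.2,
}
print(should_stop_testing(status))  # all quality gates pass here, so True
```

In practice no single flag decides the matter; the point of the sketch is only that hard stops (time, money) and quality gates (coverage, error rate, risk) are different kinds of criteria.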

In practice, the decision to stop testing is usually based on the level of risk acceptable to management. Since testing is a never-ending process, we can never assume that 100% of the testing is done; we can only minimize the risk of shipping the product to the customer with X amount of testing completed. The risk can be measured through risk analysis, but for short-duration / low-budget projects the risk can be deduced simply from:

1. A measure of test coverage.
2. The number of test cycles.
3. The number of high-priority errors.

The Software Assurance Technology Center (SATC) in the Systems Reliability and Safety Office at the Goddard Space Flight Center (GSFC) is investigating the use of software error data as an indicator of testing status. Items of interest for determining testing status include projections of the number of errors remaining in the software and the expected amount of time needed to find some percentage of the remaining errors. Projecting the number of errors remaining in the software requires an estimate of the total number of errors in the software at the start of testing and a count of the errors found and fixed throughout testing. There are a number of models that reasonably fit the rate at which errors are found in software; the most commonly used is referred to here as the Musa model.

This model is not easily applied at GSFC, however, due to the availability and quality of the error data. At GSFC, useful error data are not readily obtainable, even for projects in the Software Engineering Laboratory. Of the projects studied by the SATC, only a few had an organized accounting method for tracking errors, and they often lacked a consistent format for error logging. Some projects recorded the errors that were found but kept no information about the resources applied to testing. The error data frequently contained the date the error was entered into the tracking system rather than the actual date the error was discovered. To use traditional models such as the Musa model to estimate the cumulative number of errors, reasonably accurate data are needed on the time of error discovery and on the level of resources applied to testing. Real-world software projects are generally not obliging when it comes to the accuracy or completeness of error data.
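For concreteness, here is a small, dependency-free sketch of a Musa-style cumulative error curve, n(t) = N(1 - e^(-kt)), together with a crude grid-search least-squares fit. The synthetic data, grid ranges, and step sizes are illustrative assumptions; this brute-force search merely stands in for a proper fitting procedure.

```python
import math

def musa_cumulative(N, k, t):
    """Cumulative errors found by time t under a Musa-style model."""
    return N * (1.0 - math.exp(-k * t))

def fit_musa(times, found):
    """Crude grid-search least-squares fit for (N, k).

    N is searched from the highest observed count up to 3x that count,
    and k over a fixed grid; both ranges are illustrative choices.
    """
    lo = int(max(found))
    best = None
    for N in range(lo, 3 * lo):
        for i in range(1, 500):
            k = i / 1000.0
            sse = sum((musa_cumulative(N, k, t) - f) ** 2
                      for t, f in zip(times, found))
            if best is None or sse < best[0]:
                best = (sse, N, k)
    return best[1], best[2]

# Synthetic example: 10 weeks of data generated from N=100, k=0.05.
weeks = list(range(1, 11))
found = [musa_cumulative(100, 0.05, t) for t in weeks]

N_hat, k_hat = fit_musa(weeks, found)
remaining = N_hat - found[-1]   # projected errors still in the software
```

With real project data the fit is much less clean than this synthetic case, which is exactly the difficulty described above: the model needs accurate discovery dates and resource levels to produce a trustworthy estimate of N.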
The models developed by the SATC for predicting error trends try to compensate for these deficiencies in the quantity and availability of project data.
To compensate for the quality of the error data, the SATC developed software error trend models using two techniques, each based on the basic Musa model but with the constant in the exponential term replaced by a function of time that describes the "intensity" of the testing effort. The form and parameters of this function can be estimated from measures such as CPU time or staff hours spent on testing. The first technique fits the cumulative error data to the modified Musa model with a least-squares fit based on gradient methods. This technique requires data on the errors found and on the number of staff hours devoted to testing during each week of the testing activity. The second technique uses a Kalman filter to estimate both the total number of errors in the software and the level of testing being performed. This technique requires the error data plus initial estimates of the total number of errors and of the initial amount of effort applied to testing.

The SATC has now reviewed and modeled the error data of a limited number of projects. Generally, only the date on which an error was entered into the error tracking system was available, not the date the error was discovered. No useful data were available on the human or computer resources allocated to testing. What is needed for more accurate modeling is the total time spent on testing, even if the times are approximate. Using the reported amount of time to find/fix individual errors produced no reasonable correlation with the needed resource function. Some indirect attempts to estimate resource usage, however, led to some good fits. On one project, each error was reported along with the name of the person who found it. Resource usage for testing was then estimated as follows: each person was assumed to be working on the testing effort for a period beginning when that person's first error was reported and ending when their last error was reported.
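A sketch of the key idea behind the first technique, under my illustrative assumption that weekly staff-hours stand in for testing intensity: the constant-rate exponent k·t is replaced by b·E(t), where E(t) is cumulative effort, and the fit proceeds as before. The hours, counts, and grids below are invented, and the grid search stands in for the gradient-based fit the SATC used.

```python
import math

def n_found(N, b, effort):
    """Cumulative errors after cumulative testing effort E: N*(1 - exp(-b*E))."""
    return N * (1.0 - math.exp(-b * effort))

# Hypothetical weekly staff-hours; E holds the running total, a discrete
# approximation of the integral of the "intensity" function.
weekly_staff_hours = [40, 40, 20, 60, 40, 10, 50]
E, total = [], 0.0
for h in weekly_staff_hours:
    total += h
    E.append(total)

# Synthetic observations generated from N=90, b=0.008 for the demonstration.
observed = [n_found(90, 0.008, e) for e in E]

# Grid-search least squares over (N, b); grids are illustrative choices.
best = min(
    (
        (sum((n_found(N, b / 10000.0, e) - f) ** 2
             for e, f in zip(E, observed)), N, b / 10000.0)
        for N in range(int(max(observed)), 200)
        for b in range(1, 200)
    ),
    key=lambda triple: triple[0],
)
sse, N_hat, b_hat = best
remaining = N_hat - observed[-1]
```

The substitution matters because real testing effort is uneven: a week with 10 staff-hours should move the curve far less than a week with 60, which a pure function of calendar time cannot capture.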
The percentage of time each person worked during this period is assumed to be an unknown constant that differs from person to person. Using this technique led to a resource curve similar to the Rayleigh curve.
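The Rayleigh curve mentioned above can be written as a simple function of time. This sketch, with invented parameters, just shows its characteristic build-up-then-tail-off shape, peaking at time t_d.

```python
import math

def rayleigh_effort(t, K, t_d):
    """Instantaneous effort at time t for a Rayleigh staffing profile.

    K scales the total effort and t_d is the time of peak effort.
    Both parameter values used below are illustrative.
    """
    return (K * t / t_d ** 2) * math.exp(-t * t / (2.0 * t_d ** 2))

# With K=1000 staff-hours and a peak at week 10, effort ramps up,
# peaks at t_d, then tails off.
profile = [rayleigh_effort(t, 1000.0, 10.0) for t in range(0, 31)]
```

A per-person resource estimate that ramps up as testers join, peaks, and trails off as they finish naturally resembles this shape, which is why the indirect estimate above produced a Rayleigh-like curve.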
In most of the projects, there was good agreement between the trend model and the reported error data. More importantly, the estimates of the total number of errors and of the error discovery parameter, made early in the testing activity, appeared to provide reliable indicators of the total number of errors actually found and of the time it took to find future errors. Figure 2 shows the relationship between the reported errors and the SATC trend model for one project. The graph represents the data available at the conclusion of the project. This close fit was also found in other projects when sufficient data were available.

In another project, different estimates of the total number of errors were obtained when the estimates were made over different testing time intervals. That is, there was inconsistent agreement between the trend model and the error data over different time intervals. Through further discussion with the project manager, it was learned that the error reporting rate on the project went from approximately 100% during integration testing to 40% during acceptance testing. There was also a significant amount of rework of the code, and the testing followed a sequential strategy: a single functional area was thoroughly tested before testing proceeded to the next functional area of the code. Thus, the instability of the estimates of total errors is a useful indicator that a significant change occurred in either the testing or the reporting on the project. Figure 3 shows the data for this project. Note the change in the slope of the reported errors occurring around 150 days. The flattening at the right end of the curve is due to a break in testing rather than to a lack of error detection. This project is still undergoing testing.

If the error data are divided into the different testing phases of the life cycle (e.g., unit, system, integration), the error curves derived using the SATC model better fit the rate at which errors are found in each phase.
Some points about the SATC error trend model need clarification. The formulation of the SATC equation is a direct result of assuming that at any instant of time, the rate of discovery of errors is proportional to the number of errors remaining in the software and to the resources applied to finding errors. Additional conditions necessary for the SATC trend model to be valid are:

1. The code being tested is not substantially changed during the testing process,
either by the addition or the rewriting of large amounts of code.
2. All errors that are detected are reported.
3. All of the software is tested, and the testing of the software is uniform
throughout the time of the testing activity.
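The proportionality assumption stated above pins down the model's form. Writing n(t) for the cumulative errors found, N for the total number of errors, and assuming a constant level of testing resources, the assumption and its solution are:

```latex
\frac{dn}{dt} = k\,\bigl(N - n(t)\bigr), \qquad n(0) = 0
\quad\Longrightarrow\quad
n(t) = N\bigl(1 - e^{-kt}\bigr)
```

With a time-varying testing intensity r(t), the term kt in the exponent is replaced by the integral of r from 0 to t, which is the SATC modification of the basic Musa model described earlier.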
