Test Execution

Tracking Progress:
Depending on your approach to testing, tracking progress will be either easy or difficult. If you use a script-heavy approach, tracking progress is easy: all you need to do is compare the number of scripts you have left to run with the time available, and you have a measure of your progress. If you don't script, tracking progress is more difficult; you can only measure the amount of time you have left and use that as a guide. If you use advanced metrics (see the next chapter) you can compare the number of defects you have found with the number of defects you expected to find. This is an excellent way to track progress and it works independently of your approach to scripting.
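These measures are simple arithmetic; the Python sketch below (with purely hypothetical counts and function names) shows progress computed from scripts executed, from elapsed time, and from defects found against defects expected:

# Hypothetical progress measures; all counts below are made up for illustration.

def script_progress(scripts_run, scripts_total):
    """Progress as the fraction of planned test scripts already executed."""
    return scripts_run / scripts_total

def schedule_progress(days_elapsed, days_total):
    """Progress measured purely by elapsed time (the unscripted case)."""
    return days_elapsed / days_total

def defect_progress(defects_found, defects_expected):
    """Progress as the fraction of expected defects already found."""
    return defects_found / defects_expected

# Example: 50 of 100 scripts run, 50 of 100 days used, 10 of 40 expected defects found.
print(f"Scripts run:      {script_progress(50, 100):.0%}")
print(f"Schedule elapsed: {schedule_progress(50, 100):.0%}")
print(f"Expected defects found so far: {defect_progress(10, 40):.0%}")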

Adjusting the plan:
But tracking progress without adjusting your plan is wasting information. Suppose you planned 100 test cases, each taking a day to run, and the project has given you 100 days to run them. You are 50 days into the project, on schedule, having run 50% of your test cases. But you have found no defects. The incurable optimist will say, "Well, perhaps there aren't any!" and stick to the plan. The experienced tester will say something unprintable and change the plan. The chance of being 50% of the way through test execution and finding no defects is very slim. It more likely means there is a problem with the test cases or with the way they are being executed. Either way, you are looking in the wrong place. Regardless of how you prepare for testing, you should have some kind of plan. If that plan is broken into separate pieces, you can review it and determine what is going wrong. Perhaps development has not delivered the bulk of the functional changes yet? Perhaps the test cases are out of date or not specific enough? Perhaps you underestimated the size of the test effort?
Whatever the problem, you need to jump on it quickly. The other time you will need your plan is when it gets adjusted for you. You expected to test function A, but the development manager informs you that function B has been delivered instead and function A is not ready yet. Or you are halfway through test execution when the project manager announces you have to finish two weeks earlier. If you have a plan, you can change it.

Coping with the Time Crunch:
The most common problem a tester has to deal with is being "crunched" on time. Because testing tends to come at the end of a development cycle, it tends to be hit hardest by time pressure. All sorts of things can conspire to leave you with less time than you need. Here is a list of the most common causes:
• Schedule slip - things are delivered later than expected
• More defects are found than expected
• Key people leave the business or go on sick leave
• Someone moves the completion date, or changes the underlying requirements the application must meet
There are three basic ways to deal with this:
• Work harder - the least attractive and least clever alternative. Working weekends or overtime can increase productivity, but it will lead to burn-out in the team and may compromise the effectiveness of their work.
• Get more people - also not very attractive. Throwing people at a problem seldom speeds things up. New people need to be trained and managed, and they add communication complexity that gets worse the more you add (see "The Mythical Man Month" by Frederick Brooks).
• Prioritize - you have already decided that you cannot test everything, so perhaps you can make clever decisions about what to test next? Test the riskiest things: the things you think will be buggy, the things the developers think are buggy, the things with the most visibility or importance. Push secondary or "safe" code to one side to test later if you have time, but make everyone aware of what you are doing; otherwise you could end up being the only one held responsible when buggy code is released to the customer (a rough sketch of this kind of prioritization follows at the end of this section).
The fourth and sneakiest way is also one of the best - contingency. At the start, when you estimate how long testing will take, add some fat to your numbers. Then, when things go wrong, as they always do, you will have some time up your sleeve to claw things back.
Contingency may be implicit (hidden) or explicit (planned in the schedule). This depends on the maturity of your project management method. Because teams have a tendency to use all the time available to them, it is often better to hide contingency and roll it out only in an emergency.
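The prioritization idea above can be made concrete with a small Python sketch; the test areas, scores, and simple additive weighting below are assumptions chosen purely for illustration:

# Hypothetical risk-based prioritization: score each area and test the riskiest first.
test_areas = [
    {"name": "payment processing", "risk": 5, "visibility": 5},
    {"name": "report formatting", "risk": 2, "visibility": 3},
    {"name": "legacy 'safe' code", "risk": 1, "visibility": 1},
]

# Higher combined score means test sooner; 'safe' code naturally falls to the back of the queue.
for area in sorted(test_areas, key=lambda a: a["risk"] + a["visibility"], reverse=True):
    print(f"{area['name']}: priority score {area['risk'] + area['visibility']}")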

Defect Management:
Defects must be handled in a methodical and systematic manner. There is no point in finding a defect if it is not going to be fixed; there is no point in fixing it if you do not know it has been fixed; and there is no point in releasing the application if you do not know which defects have been fixed and which remain. How do you know? The answer is to have a defect tracking system. The simplest may be a database or a spreadsheet. A better alternative is a dedicated system that enforces your defect management standards and method and makes reporting easier. Some of these systems are expensive, but there are plenty of freely available alternatives.
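As a minimal sketch of what even the simplest tracking system records, the Python snippet below keeps an in-memory defect log and exports it to a spreadsheet-friendly CSV file; the field names and sample entry are assumptions used only for illustration:

# A deliberately minimal, in-memory defect log; a real project would normally use
# a dedicated tracking tool, but the information recorded is much the same.
import csv

defects = []

def log_defect(title, severity, status="New"):
    """Record a new defect and return its entry."""
    defect = {"id": len(defects) + 1, "title": title, "severity": severity, "status": status}
    defects.append(defect)
    return defect

def update_status(defect_id, new_status):
    """Change the status of an existing defect."""
    for defect in defects:
        if defect["id"] == defect_id:
            defect["status"] = new_status

log_defect("Program crashes when saving an empty file", "High")
update_status(1, "Assigned")

# Export to CSV so the log can be shared or reported on like a spreadsheet.
with open("defects.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["id", "title", "severity", "status"])
    writer.writeheader()
    writer.writerows(defects)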

Importance of Good Defect Reporting:
Cem Kaner said it best: "the purpose of reporting a defect is to get it fixed." A poorly written defect report wastes the time and hard work of plenty of people. A concisely written, descriptive report results in a bug being eliminated in the easiest way possible. Moreover, for testers, defect reports represent the primary deliverable of their work. The quality of a tester's defect reports is a direct reflection of the quality of their skills.
Defect reports have a longevity well beyond their immediate use. They may be distributed beyond the immediate project team and passed on to different levels of management within different organizations. Developers and testers must therefore take care to always maintain a professional attitude in defect reports.

Characteristics of a Good Defect Report:
• Objective - criticizing someone else's work can be difficult. Take care that defects are objective, impartial and dispassionate. For example, don't say "your program crashed", say "the program crashed", and don't use words like "stupid" or "broken".
• Specific - one report should be logged per defect, and only one defect per report.
• Concise - each defect report should be simple and to-the-point. Defects should be reviewed and edited after being written to reduce unnecessary complexity.
• Reproducible - the single biggest reason developers reject defects is that they cannot reproduce them. At a minimum, a defect report must contain enough information to allow anyone to easily reproduce the problem.
• Explicit - defect reports should state information clearly, or they should refer to a specific source where the information can be found. For example, "click the button to continue" implies the reader knows which button to click, while "click the Next button" states explicitly what to do.
• Persuasive - the pinnacle of defect reporting is the ability to champion defects by presenting them in a way that makes developers want to fix them.

Isolation and Generalization:
Isolation is the process of examining the causes of a defect. While the exact cause may not be determined, it is important to try to separate the symptoms of the problem from its cause. Isolating a defect is usually done by reproducing it several times in different situations to gain an understanding of how and when it occurs.
Generalization is the process of understanding the wider impact of a defect. Because developers reuse code elements throughout a program, a defect present in one element of code can appear in other areas. A defect that is discovered as a minor issue in one area of code could be a major problem in another area. Individuals logging defects should try to extrapolate where else a problem may occur, so that a developer will take into account the full context of the defect, not just an isolated incident.
A defect report written without isolation and generalization is a defect only half reported.

Severity:
The importance of a defect is often referred to as its "severity". There are many schemes for assigning defect severity: some complex, some simple. Almost all feature "Severity-1" and "Severity-2" ratings, which are generally considered defects serious enough to delay completion of the project. Typically a project cannot be completed with outstanding Severity-1 issues, and only with a limited number of Severity-2 issues.
Classification schemes are often complicated; developers and testers get into arguments about whether a defect is Sev-4 or Sev-5 and time is lost. I therefore tend to favor a simpler scheme. Defects should be assessed in terms of impact and probability. Impact is a measure of the seriousness of the defect when it occurs and can be classed as "high" or "low": high impact implies that the user cannot complete the task at hand, low impact implies there is a workaround or it is a cosmetic error. Probability is a measure of how likely the defect is to occur, and again is assigned either "low" or "high". Defects can then be assigned a severity based on the combination of impact and probability.
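One common way to combine the two ratings is sketched below in Python; the mapping for the mixed cases ("Medium") is an assumed convention for illustration, not a definitive rule:

# Assumed impact/probability -> severity mapping; the "Medium" entries for the
# mixed cases are a common convention rather than a rule from this text.
SEVERITY = {
    ("High", "High"): "High",
    ("High", "Low"): "Medium",
    ("Low", "High"): "Medium",
    ("Low", "Low"): "Low",
}

def severity(impact, probability):
    """Look up the severity implied by an impact/probability pair."""
    return SEVERITY[(impact, probability)]

print(severity("High", "Low"))   # prints "Medium"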

This removes most of the debate in assigning severity.

Status:
Status represents the current stage of a defect in its life cycle or workflow. Commonly used status flags are:
• New - a new defect has been raised by the testers and is awaiting assignment to a developer for resolution
• Assigned - the defect has been assigned to a developer for resolution
• Rejected - the developer could not reproduce the defect and has rejected the defect report, returning it to the tester who raised it
• Fixed - the developer has fixed the defect and checked in the appropriate code
• Ready for testing - the release manager has built the corrected code into a release and has passed that release to the tester for retesting
• Failed retest - the defect is still present in the corrected code and is passed back to the developer
• Closed - the defect has been correctly fixed and the defect report can be closed, after review by a test lead
The above status flags define a life cycle in which a defect progresses from "New" through "Assigned" to (hopefully) "Fixed" and "Closed". A swim-lane diagram can be used to describe the roles and responsibilities in the defect management life cycle.
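As a minimal sketch of this life cycle in Python, the snippet below models the status flags as a small state machine; the set of allowed transitions is inferred from the descriptions above and should be treated as an assumption:

# Status flags and the transitions implied by the descriptions above.
# The transition table is an inference from the prose, not a formal specification.
TRANSITIONS = {
    "New": {"Assigned"},
    "Assigned": {"Fixed", "Rejected"},
    "Rejected": {"Assigned", "Closed"},          # tester clarifies and reassigns, or closes
    "Fixed": {"Ready for testing"},
    "Ready for testing": {"Closed", "Failed retest"},
    "Failed retest": {"Assigned"},
    "Closed": set(),
}

def move(current, new):
    """Advance a defect to a new status, rejecting illegal transitions."""
    if new not in TRANSITIONS[current]:
        raise ValueError(f"Illegal transition: {current} -> {new}")
    return new

# The happy path: New -> Assigned -> Fixed -> Ready for testing -> Closed.
status = "New"
for next_status in ("Assigned", "Fixed", "Ready for testing", "Closed"):
    status = move(status, next_status)
    print(status)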

Elements of a Defect Report:
• Title
• Severity
• Status
• Initial configuration
• Software Configuration
• Expected behavior
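A sketch of how these elements might be captured as a structured record is shown below; in practice a report would usually also include steps to reproduce and the actual behavior observed, but the Python sketch sticks to the fields listed here, with made-up sample values:

from dataclasses import dataclass

# Fields mirror the elements listed above; the sample values are illustrative only.
@dataclass
class DefectReport:
    title: str
    severity: str
    status: str
    initial_configuration: str
    software_configuration: str
    expected_behavior: str

report = DefectReport(
    title="Program crashes when saving an empty file",
    severity="High",
    status="New",
    initial_configuration="Clean install, no existing user data",
    software_configuration="Build 1.2.3 on Windows 11",
    expected_behavior="An empty file is saved without error",
)
print(report.title, "-", report.severity)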
