A test strategy is an outline that describes the testing portion of the software development cycle. It is created to inform project managers, testers and developers about key issues of the testing process: the objective of testing, the methods of testing new functions, the total time and resources required for the project, and the testing environment. The test strategy describes how the product risks of the stakeholders are mitigated at the test levels, which types of testing are performed at each level, and which entry and exit criteria apply.

The test strategy is created based on development design documents. The system design document is the primary one used and, occasionally, the conceptual design document may be referred to. The design documents describe the functionality of the software to be enabled in the upcoming release. For every stage of development design, a corresponding test strategy should be created to test the new feature sets.

The purpose of a test strategy is to clarify the major tasks and challenges of the test project. "Test approach" and "test architecture" are other terms commonly used to describe what is here called the test strategy.
An example of a poorly stated (and probably poorly conceived) test strategy:
"We will use black box testing, cause-effect graphing, boundary value analysis and white box testing to test this product against its specification."
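The techniques named in that statement only become concrete when tied to specific features and inputs. As a hedged sketch, boundary value analysis for a hypothetical discount rule (the `discount` function, its threshold and the cents-based representation are all invented for illustration, not taken from the text) might look like:

```python
# Boundary value analysis: exercise values at, just below and just above
# the boundary of an input domain. The discount rule is a hypothetical
# example, not taken from any real specification.

def discount(order_cents: int) -> float:
    """Return the discount rate: 10% for orders of $100.00 or more."""
    return 0.10 if order_cents >= 10_000 else 0.0

def boundary_cases(boundary: int, step: int = 1) -> list[int]:
    """The classic three test points around a single boundary."""
    return [boundary - step, boundary, boundary + step]

# Expected rate at each point around the $100.00 boundary.
expected = {9_999: 0.0, 10_000: 0.10, 10_001: 0.10}
for value in boundary_cases(10_000):
    assert discount(value) == expected[value], f"failed at {value}"
print("boundary cases passed")
```

A well-stated strategy would name the boundaries per requirement rather than the technique in the abstract.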
Creating a test strategy
The test strategy is a formal description of how a software product will be tested. A test strategy is developed for all levels of testing, as required. The test team analyzes the requirements, writes the test strategy and reviews the plan with the project team. The test plan may include test cases, conditions, the test environment, a list of related tasks, pass/fail criteria and risk assessment.
Inputs for this process:
* A description of the required hardware and software components, including test tools. This information comes from the test environment, including test tool data.
* A description of the roles and responsibilities of the resources required for the test, and schedule constraints. This information comes from staffing and scheduling.
* Testing methodology. This is based on known standards.
* Functional and technical requirements of the application. This information comes from requirements, change requests, and technical and functional design documents.
* Requirements that the system cannot provide, e.g. system limitations.
Outputs for this process:
* An approved and signed-off test strategy document and test plan, including test cases.
* Testing issues requiring resolution. Usually this requires additional negotiation at the project management level.
Defining a test strategy
A solid testing strategy provides the framework necessary to implement the testing methodology. A separate strategy should be developed for each system being developed, taking into account the development methodology used and the specific application architecture. The heart of any testing strategy is the master strategy document. It aggregates all the information from the requirements, system design and acceptance criteria into a detailed plan for testing. A detailed master strategy should cover the following:
State the business objective of the application and define the scope of testing. The statement should include a list of activities that are in or out of scope. A sample list includes:
* List of software to be tested
* Application configurations to be tested
* Documentation to be validated
* Hardware for testing
The system under test should be measured by its compliance with the requirements and the user acceptance criteria. Each requirement and acceptance criterion must be mapped to specific test designs that validate and measure the expected results for each test being performed. The objectives should be listed in order of importance and weighted by risk.
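That mapping and weighting can be kept as structured data so the riskiest requirements are tested first. A minimal sketch follows; every requirement ID, objective, weight and test-design name here is a placeholder invented for illustration:

```python
# Map each requirement/acceptance criterion to the test designs that
# validate it, then order by risk weight (highest first). All names
# and weights below are hypothetical examples.

requirements = [
    {"id": "REQ-01", "objective": "user login", "risk_weight": 9,
     "test_designs": ["TD-LOGIN-01", "TD-LOGIN-02"]},
    {"id": "REQ-02", "objective": "report export", "risk_weight": 4,
     "test_designs": ["TD-EXPORT-01"]},
    {"id": "REQ-03", "objective": "backup & recovery", "risk_weight": 8,
     "test_designs": ["TD-BACKUP-01"]},
]

# Order of importance: highest risk weight first.
prioritized = sorted(requirements, key=lambda r: r["risk_weight"], reverse=True)
for req in prioritized:
    print(req["id"], req["risk_weight"], ", ".join(req["test_designs"]))
```

Keeping the mapping explicit also makes it easy to spot requirements with no test design assigned, which would otherwise leave a coverage gap.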
Features and functions to be tested
Every feature and function must be listed for inclusion or exclusion, along with a description of the exceptions. Some features may not be testable due to a lack of equipment or lack of control, etc. The list should be grouped by functional area to add clarity.
The following is a basic list of functional areas:
* Backup and recovery
* Interface design
* Procedures (user, operating, installation)
* Requirements and design
* Error handling
* System exceptions and third-party application errors
The approach provides the detail necessary to describe the levels and types of testing. The basic V-model shows which types of tests are needed to validate the system.
More specific test types include functionality testing, performance testing, backup and recovery, security testing, environmental testing, conversion testing, usability testing, and installation and regression testing. The specific testing methodology should be described and the entry/exit criteria for each phase indicated in a phase matrix. A project plan that lists the resources and schedule for each test cycle must be created, mapping the specific testing tasks to the overall project development plan.
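One simple way to capture such a phase matrix is a mapping from each test phase to its test types and entry/exit criteria. The phases and criteria below are illustrative assumptions, not prescribed by the text:

```python
# A minimal phase matrix: each testing phase lists its test types and
# the entry/exit criteria that gate it. Contents are illustrative only.

phase_matrix = {
    "system test": {
        "types": ["functionality", "performance", "security"],
        "entry": ["integration test passed", "test environment available"],
        "exit": ["no open critical defects", "all planned cases executed"],
    },
    "regression test": {
        "types": ["regression"],
        "entry": ["code change delivered", "baseline results available"],
        "exit": ["baseline results reproduced"],
    },
}

for phase, row in phase_matrix.items():
    print(f"{phase}: types={row['types']}, "
          f"{len(row['entry'])} entry / {len(row['exit'])} exit criteria")
```

In practice the same structure feeds the project plan: each phase row becomes a scheduled test cycle with its own resources.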
Testing Process and Procedures
The order of test execution and the steps required for each type of test should be detailed enough to give clear direction for generating test designs and test cases. The procedures must cover how test data is created, managed and loaded. Test cycles should be planned and scheduled based on system availability and development delivery dates. All application and environmental dependencies must be identified, along with the procedures for gaining access to all systems.
Each level of testing should have a defined set of entry/exit criteria used to validate that all prerequisites for a valid test have been met. All mainstream testing methodologies provide an extensive list of entry/exit criteria and checklists. In addition to the standard list, items should be added based on the specific needs of the tests. Common additions include environment availability, data availability, and validation that the code is ready to be tested. Each level of testing should define specific pass/fail acceptance criteria, to ensure that all quality gates have been validated and that the test plan focuses on developing tests that validate the specific criteria defined by the user acceptance plan.
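A set of entry criteria like this can be checked mechanically before a test level starts. The sketch below is an assumption-laden illustration; the criterion names and their statuses are invented, not taken from any standard checklist:

```python
# Validate a test level's entry criteria before execution begins.
# Criteria names and their statuses are hypothetical examples.

def gate_open(criteria: dict[str, bool]) -> tuple[bool, list[str]]:
    """Return (all criteria met, list of unmet criteria)."""
    unmet = [name for name, met in criteria.items() if not met]
    return (not unmet, unmet)

entry_criteria = {
    "environment available": True,
    "test data loaded": True,
    "code build validated": False,  # build not yet ready to test
}

ok, blockers = gate_open(entry_criteria)
print("enter test level" if ok else f"blocked by: {blockers}")
```

The same function works unchanged for exit criteria, since both are just lists of conditions that must all hold before the gate opens.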
All testing tools should be identified, and their use, ownership and dependencies defined. The tools section includes manual tools, such as spreadsheets, templates and documents, as well as automated tools for test management, defect tracking, regression testing and performance/load testing. Any specific skill sets required must be identified and compared with the capabilities of the existing project staff to highlight training needs.
A plan to address the disposition of defects is necessary: it should list the procedures for escalating, correcting and retesting failed tests, along with a mitigation plan for high-risk tests. Defect tracking should include basic performance metrics based on the number and type of defects found.
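Such count-and-type metrics fall directly out of the tracker's records. A minimal sketch, in which every defect record below is fabricated for illustration:

```python
from collections import Counter

# Hypothetical defect records, as might be exported from a defect tracker.
defects = [
    {"id": 1, "type": "functional", "severity": "high"},
    {"id": 2, "type": "functional", "severity": "low"},
    {"id": 3, "type": "performance", "severity": "high"},
    {"id": 4, "type": "usability", "severity": "medium"},
]

# Basic metrics: defect counts by type and by severity.
by_type = Counter(d["type"] for d in defects)
by_severity = Counter(d["severity"] for d in defects)

print("total defects:", len(defects))
print("by type:", dict(by_type))
print("by severity:", dict(by_severity))
```

Tracking these counts per test cycle makes the escalation and retest workload visible early.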
Roles and Responsibilities
A matrix listing the roles and responsibilities of everyone involved in the testing activities, together with the expected amount of time allocated to the project, must be prepared.
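Such a matrix can be kept as structured data rather than a static document. In the sketch below, the roles, responsibilities and allocation percentages are placeholders, not values from the text:

```python
# A roles-and-responsibilities matrix with expected time allocation.
# All entries are placeholder examples.

matrix = [
    {"role": "test lead", "responsibility": "strategy and planning",
     "allocation_pct": 100},
    {"role": "tester", "responsibility": "case design and execution",
     "allocation_pct": 80},
    {"role": "developer", "responsibility": "defect fixes and unit tests",
     "allocation_pct": 30},
]

for row in matrix:
    print(f"{row['role']:<10} {row['allocation_pct']:>3}%  {row['responsibility']}")
```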