Non-functional Testing

Testing the design:
Requirements, design and technical specifications can be tested in their own right. The purpose of evaluating a specification is threefold:
• To ensure it is accurate, clear and internally consistent (verification)
• To evaluate how well it reflects reality and the expectations of the end user (validation)
• To make sure it is consistent with all upstream and downstream specifications

The technical specification is an embodiment of the requirements, which must then flow through to later stages such as development and testing. If the requirements are poorly specified, not only will the product be inadequate, it will also be astoundingly difficult to verify. If the technical specification is out of step with the requirements, then the development team will likely be well on its way to producing the wrong product. Because these documents are often produced in parallel (i.e. the steps in the waterfall model overlap), it is very common for discrepancies to creep in.

Each assertion in a specification should be reviewed against a list of desirable attributes:
• Specific - it is essential to eliminate as much uncertainty as possible, as early as possible. Words like "probably", "maybe" or "might" indicate indecision on the part of the author and hence ambiguity. Requirements containing these words should be removed or rewritten to provide some kind of guarantee of the desired outcome.
• Measurable - a requirement that uses comparative words like "better" or "faster" must specify the quantitative or qualitative improvement with a specific value (e.g. 100% faster or 20% more accurate).
• Testable - in line with the above, a requirement should be specified with some idea of how it will be evaluated. A requirement that is not verifiable is ultimately not "provable" and therefore cannot be confirmed either positively or negatively.
• Consistent - if one requirement contradicts another, the contradiction must be resolved. Often splitting a requirement into its component parts helps uncover inconsistencies in each, which can then be clarified.
• Clear - requirements should be simple, clear and concise. Requirements composed of long-winded sentences or multiple clauses imply several possible outcomes and breed ambiguity. Split them into individual statements.
• Exclusive - specifications should state not only what will be done, but explicitly what will not be done. Leaving something unspecified invites assumptions.

It is also important to differentiate requirements from design documents. Requirements should not talk about "how" to do something, and a design specification should not talk about "why" things are done.
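A requirement that is specific, measurable and testable can be expressed directly as an automated check. The sketch below is illustrative only: the `process_batch()` function and the requirement "95% of transactions complete in under 200 ms" are both made up for the example.

```python
import random
import time

def process_batch(n):
    """Stand-in for the system under test: process n transactions,
    returning the duration of each one in seconds."""
    durations = []
    for _ in range(n):
        start = time.perf_counter()
        time.sleep(random.uniform(0.001, 0.005))  # simulated work
        durations.append(time.perf_counter() - start)
    return durations

# Hypothetical requirement: 95% of transactions complete in under 200 ms.
durations = process_batch(100)
within_limit = sum(1 for d in durations if d < 0.200)
assert within_limit / len(durations) >= 0.95, "requirement not met"
print(f"{within_limit}/{len(durations)} transactions within 200 ms")
```

A vague requirement ("the system should be fast") offers nothing to put on the right-hand side of that assertion, which is exactly why it is unverifiable.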

Usability Testing:
Usability testing is the process of observing users' reactions to a product and adjusting the design to suit their needs. Marketing knows usability testing as "focus groups", and while the two differ in intent, many of the principles and processes are the same.

In usability testing a basic model or prototype of the product is put in front of evaluators who are representative of typical end users. They are then set a series of standard tasks which they must complete using the product. Any difficulty or obstruction they encounter is then noted by a host or observers, and design changes are made to correct the product. The process is then repeated with the new design to evaluate those changes.

There are some important principles of usability testing to be understood:
• Users are not testers, engineers or designers - you are not asking users to make design decisions about the application. Users will not have a sufficiently broad technical knowledge to make decisions that are right for everyone. However, by seeking their opinions, the development team can select the best of several possible solutions.
• You are testing the product, not the users - all too often developers believe it is a 'user problem' when there is an issue with an interface or design element. Users should be able to 'learn' how to use the software if they are taught properly! Maybe if the software were designed properly, they would not have to learn it at all?
• Selection of end-user evaluators is critical - you must select evaluators who are directly representative of your end users. Do not pick just anyone off the street, do not use managers, and do not use technical people unless they are your target audience.
• Usability testing is a design tool - usability testing should be conducted early in the life cycle, when it is easy to implement the changes it suggests. Leaving it until later will make changes difficult to implement.

One misconception about usability studies is that a large number of evaluators is required. Research has shown that no more than four or five evaluators may be needed. Beyond that number the amount of new information discovered diminishes rapidly, and each additional evaluator offers little or nothing new. And four or five is usually convincing. If only one evaluator has a problem with the application, it might be a quirk; is the problem with the evaluator or with the application? With two or three evaluators, the problems could still be put down to personal idiosyncrasies. With four or five it is beyond a shadow of a doubt.

The proper way to select evaluators is to profile a typical end user and then solicit the services of individuals who closely fit that profile. A profile should consider factors such as age, experience, gender, education and prior training or technical expertise.

I love watching developers take part as observers in usability studies. As a developer myself, I know the hubris that goes along with designing software. In the throes of creation it is difficult to imagine that someone else, much less a user (!), could offer a better contribution to the design than your highly paid, highly educated self. Typically developers sit through the session of the first evaluator and quietly snigger to themselves, attributing the problems to 'finger trouble' or user ineptitude. After the second evaluator encounters the same problems the comments become less frequent, and when the third user stumbles in the same place they go quiet. By the fourth user they have worried looks on their faces, and during the fifth session they are scratching at the glass, trying to get in to speak to the user and "find out how to fix the problem".

Other issues that must be considered when conducting a usability study are ethical considerations. Since your subjects are human beings in what is essentially a scientific study, you need to consider carefully how they are treated. The host should take pains to put them at ease, both to help them remain objective and to remove any tension the artificial environment of a usability study can generate. You might not realise how traumatic using your application can be for the average user! Separating the users from the observers is also a good idea, since nobody performs well with a crowd looking over their shoulder. This can be done with a one-way mirror or by putting the users in another room at the end of a video link. You should also consider their legal rights and make sure you have their permission to use any materials gathered during the study in further presentations or reports. Finally, confidentiality usually matters in these situations, and it is common to ask participants to sign a non-disclosure agreement (NDA).

Performance Testing:
An important aspect of modern software systems is their performance in multi-user or multi-tier environments. To test the performance of the software you need to simulate its deployment environment and simulate the traffic it will receive when in use - this can be difficult.

The most obvious way of accurately simulating the deployment environment is simply to use the live environment to test the system. This can be costly and potentially hazardous, but it provides the best possible confidence in the system. It may be impossible where the deployment system is constantly in use or is mission critical to other business applications. If it is possible, however, live system testing provides a level of confidence not achievable with other approaches, because it takes into account all the idiosyncrasies of that system without any need to replicate them in a test system.

It is also common to use capture-and-playback tools (automated testing). A capture tool is used to record the actions of a typical user performing a typical task on the system. A playback tool is then used to replay those actions as multiple simultaneous users. The multi-user playback provides an accurate simulation of the stress the real-world system will come under.

Capture-and-playback tools must be used with caution, however. Merely repeating exactly the same series of actions on the system may not constitute an adequate test. Significant amounts of randomisation and variation should be introduced to properly simulate real-world use.

You also need to understand the technical architecture of the system. If you do not target the weak points, the performance bottlenecks, then your tests will prove nothing. You need to design targeted tests that find the performance issues.

Having a baseline is also important. Without knowing how the software performed before a change, it is impossible to assess the impact of that change on performance.
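The playback-with-variation idea can be sketched in a few lines. This is only an illustration, not a real load-testing tool: the `perform_task()` function stands in for one recorded user session, and the random "think time" and concurrent threads stand in for what a commercial capture-and-playback tool would provide.

```python
import random
import threading
import time

def perform_task(user_id, latencies):
    """Stand-in for one recorded user session; records how long it took."""
    start = time.perf_counter()
    time.sleep(random.uniform(0.01, 0.05))  # randomised think time / work
    latencies.append(time.perf_counter() - start)

# Replay the recorded session as 20 simultaneous simulated users.
latencies = []
threads = [threading.Thread(target=perform_task, args=(i, latencies))
           for i in range(20)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(f"{len(latencies)} sessions completed, "
      f"worst latency {max(latencies):.3f}s")
```

The `random.uniform` call is the important detail: without that variation, every "user" would hit the system in lockstep, which is precisely the inadequate repetition the text warns against.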
"The system can only handle 100 transactions per hour!" comes the cry. But if it only needs to handle 50 transactions per hour, is this a problem?

Performance testing is a difficult, technical business. The problems that cause performance bottlenecks are often obscure and buried in the code of a database or network layer. Digging them out requires concerted effort and directed, disciplined analysis of the software.
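The baseline point can be made concrete: a measured figure only means something when compared against both the pre-change baseline and the actual requirement. The numbers below reuse the hypothetical 100 and 50 transactions per hour from the text; the baseline of 120 is invented for illustration.

```python
def assess_throughput(measured_tph, baseline_tph, required_tph):
    """Compare measured throughput (transactions/hour) to the
    pre-change baseline and to the stated requirement."""
    change_pct = (measured_tph - baseline_tph) / baseline_tph * 100
    meets_requirement = measured_tph >= required_tph
    return change_pct, meets_requirement

# System now handles 100 tph; it used to handle 120; requirement is 50.
change, meets = assess_throughput(measured_tph=100,
                                  baseline_tph=120,
                                  required_tph=50)
print(f"throughput changed {change:+.1f}% vs baseline; "
      f"requirement met: {meets}")
```

Here throughput has dropped against the baseline, yet the requirement is still met: the drop is worth investigating, but it is not automatically a defect.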
