Functional Testing

If the aim of a software development project is to "deliver widget X to do task Y", then the aim of "functional testing" is to prove that widget X actually does task Y. Simple? Well, not really. We are trapped by the same ambiguities that lead developers into error. Suppose the widget specification says "X should do Y", but the widget actually does Y + Z? How do we evaluate Z? Is it necessary? Is it desirable? Does it have any other consequences which the developer or the original stakeholder has not considered? Furthermore, how closely does the Y that was delivered match the Y that was specified by the original stakeholders?
Here you can see the importance of specifying requirements accurately. If you can't specify them accurately, then how can you expect anyone to deliver them accurately or, for that matter, to test them reliably? This sounds like common sense but it is much, much harder than anything else in the software development life cycle. See my manual on project management for a discussion.
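As a concrete illustration, here is a minimal sketch in Python of how a precisely specified requirement translates directly into a functional test, while an unspecified extra behaviour Z has nothing to catch it; all of the names are hypothetical:

    import unittest

    # The spec says "X should do Y": format_name must return "Surname, Forename".
    # The developer also added an unrequested extra (Z): it silently strips
    # the title "Dr " from the forename. Nobody specified Z, so nothing tests it.
    def format_name(forename, surname):
        forename = forename.replace("Dr ", "")  # Z: unspecified behaviour
        return f"{surname}, {forename}"         # Y: specified behaviour

    class FormatNameTest(unittest.TestCase):
        def test_specified_behaviour_y(self):
            # This test is traceable to the specification; no test exists for Z.
            self.assertEqual(format_name("Jane", "Smith"), "Smith, Jane")

    if __name__ == "__main__":
        unittest.main()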

Alpha and Beta Testing
There are some commonly recognised milestones in the testing life cycle. Typically these milestones are known as "alpha" and "beta". There is no precise definition of what constitutes an alpha or beta test, but the following are offered as examples of what is commonly meant by these terms:

Alpha - enough functionality has reasonably been completed to enable the first round of (end-to-end) testing to begin. At this point the interface might not be complete and the system may have many bugs.

Beta - most of the functionality and the interface have been completed, and the remaining work is aimed at improving performance, eliminating defects and completing cosmetic work. At this point the system still has many defects, but they are generally well understood.

Beta testing is often associated with the first end-user testing. The product is sent out to prospective customers who have registered their interest in participating in trials of the software. Beta testing, however, needs to be well organised and controlled, otherwise the feedback will be fragmentary and inconclusive. Care must also be taken to ensure that a well-prepared prototype is delivered to end users, otherwise they will be disappointed and time will be wasted.

White Box Testing
White box or glass box testing relies on analysing the code itself and the internal logic of the software, and is usually, but not exclusively, a development task.

Static Analysis and Code Inspection
Static analysis techniques revolve around looking at the source code, or uncompiled form, of the software. They rely on examining the basic instruction set in its raw form, rather than as it runs. They are intended to trap semantic and logical errors.

Code inspection is a specific type of static analysis. It uses formal and informal reviews to examine the logic and structure of program source code and to compare it with accepted best practice. In large organisations or on mission-critical applications, a formal inspection board can be established to ensure that written software meets the required standards. In less formal inspections a development manager can perform this task, or even a peer.

Code inspection can also be automated. Many syntax and style checkers exist today which can verify that a module of code meets certain pre-defined standards. By running an automated checker across code it is easy to verify basic conformance to standards and to highlight areas that need human attention.

A variant of code inspection is the use of pair programming, as advocated in methodologies like Extreme Programming (XP). In XP's pair programming, code modules are shared between two individuals. While one person writes a section of code, the other reviews and evaluates the quality of it. The reviewer looks for flaws in logic, lapses in coding standards and bad practice. The roles are then swapped. Advocates assert this is a fast way to achieve good quality code; critics retort that it's a good way to waste a lot of people's time. As far as I'm concerned, the jury is still out.
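To give a flavour of the automated inspection described above, here is a minimal sketch using Python's standard ast module; the two rules it enforces (every function needs a docstring, no function may exceed fifty lines) are purely illustrative:

    import ast
    import sys

    MAX_FUNC_LINES = 50  # illustrative limit, not an industry standard

    def inspect(path):
        # Parse the source file and flag violations of the two example rules.
        tree = ast.parse(open(path).read(), filename=path)
        for node in ast.walk(tree):
            if isinstance(node, ast.FunctionDef):
                if ast.get_docstring(node) is None:
                    print(f"{path}:{node.lineno}: '{node.name}' has no docstring")
                length = node.end_lineno - node.lineno + 1
                if length > MAX_FUNC_LINES:
                    print(f"{path}:{node.lineno}: '{node.name}' is {length} lines long")

    if __name__ == "__main__":
        inspect(sys.argv[1])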

Dynamic Analysis
While static analysis examines the source code in its raw form, dynamic analysis looks at the compiled/interpreted code while it is running in the appropriate environment. Normally this is an analysis of variables such as memory usage, processor usage or overall performance.

One common form of dynamic analysis is memory analysis. Since memory and pointer errors form the bulk of the defects found in software, memory analysis is extremely useful. A typical memory analyser reports on the current memory usage level of a program under test and the disposition of that memory. The developer can then "tweak" or optimise the memory usage of the program to ensure the best performance and the most robust memory handling.

Often this is done by "instrumenting" the code. A copy of the source code is passed to the dynamic analysis tool, which inserts function calls into it referencing external code libraries. These calls then export run-time data on the source program to an analysis tool, which can profile the program while it is running. Often these tools are used in conjunction with other automated tools that simulate real-world conditions for the program under test. By ramping up the load on the program, or by running through typical input data, the program's use of memory and other resources can be accurately profiled under real-world conditions.
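Python's standard tracemalloc module gives a small taste of what a memory analyser reports; the build_index function being profiled here is invented for the example:

    import tracemalloc

    def build_index(n):
        # Hypothetical program under test: builds a large lookup table.
        return {i: str(i) * 10 for i in range(n)}

    tracemalloc.start()
    index = build_index(100_000)
    current, peak = tracemalloc.get_traced_memory()
    print(f"current: {current / 1024:.0f} KiB, peak: {peak / 1024:.0f} KiB")

    # Top allocation sites, much as a memory analyser would report them.
    for stat in tracemalloc.take_snapshot().statistics("lineno")[:3]:
        print(stat)
    tracemalloc.stop()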

Unit, Integration and System Testing
The first type of testing that can be conducted in any development phase is unit testing. Here, discrete components of the final product are tested independently before being assembled into larger units. Units are typically tested through the use of "test harnesses" which simulate the context into which the unit will be integrated. The test harness provides a number of known inputs and measures the outputs of the unit under test, which are then compared with predicted values to determine if there are any problems.

In integration testing, smaller units are integrated into larger units and larger units into the overall system. This differs from unit testing in that units are no longer tested independently but in groups, shifting the focus from the individual units to the interaction between them. At this point "stubs" and "drivers" take over from test harnesses. A stub is a simulation of a particular sub-unit which can be used to simulate that unit in a larger assembly. For example, if units A, B and C constitute the major parts of unit D, then the overall assembly could be tested by assembling units A and B and a simulation of C, if C were not complete. Similarly, if unit D itself were not complete, it could be represented by a "driver", a simulation of the super-unit.
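To make the idea concrete, here is a minimal test-harness sketch in Python: unit D depends on sub-unit C, which is not yet complete, so a stub stands in for C. All of the names are invented for illustration:

    import unittest

    class UnitD:
        """The unit under test: combines results from its sub-units."""
        def __init__(self, unit_c):
            self.unit_c = unit_c  # dependency injected, so it can be stubbed

        def total_price(self, items):
            return sum(self.unit_c.price_of(item) for item in items)

    class StubC:
        """Stub for the incomplete sub-unit C: returns canned, known values."""
        def price_of(self, item):
            return {"apple": 2, "pear": 3}[item]

    class UnitDTest(unittest.TestCase):
        def test_total_price_with_stubbed_c(self):
            # Known inputs, predicted outputs: the essence of a test harness.
            self.assertEqual(UnitD(StubC()).total_price(["apple", "pear"]), 5)

    if __name__ == "__main__":
        unittest.main()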

As successive areas of functionality are completed they can be evaluated and integrated into the overall project. Without integration testing you are limited to testing the fully assembled product or system, which is inefficient and error-prone. It is much better to test the building blocks as you go and to build up your project from the ground in a series of controlled steps.

System testing represents the testing of the software product as a fully assembled whole. System testing is important because it is only at this stage that the full complexity of the product is present. The focus of system testing is typically to ensure that the product responds correctly to all possible input conditions and, importantly, that the product handles exceptions in a controlled and acceptable fashion. System testing is often the most formal and most structured stage of testing.
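As an illustration of testing exception handling, a check that invalid input raises a controlled error rather than crashing the system might look like this sketch (parse_quantity is a made-up function for the example):

    import unittest

    def parse_quantity(text):
        # Hypothetical input-validation routine under test.
        value = int(text)  # raises ValueError on non-numeric input
        if value < 0:
            raise ValueError("quantity must be non-negative")
        return value

    class ExceptionHandlingTest(unittest.TestCase):
        def test_rejects_bad_input_in_a_controlled_way(self):
            # The product should fail with a defined exception, not crash.
            with self.assertRaises(ValueError):
                parse_quantity("lots")
            with self.assertRaises(ValueError):
                parse_quantity("-3")

    if __name__ == "__main__":
        unittest.main()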

The SIT or Test Team

In large organisations it is common to find a "SIT" or independent test team. SIT usually stands for "Systems Integration Testing" or "Systems Implementation Testing" or possibly "Save It, Test!" And is the role of this team unit testing, system testing or integration testing? Well, nobody really knows. The role of the SIT team is usually not unit, integration or system testing, but a combination of all three. They are expected to work with developers on unit testing, to carry out the integration of units into larger components, and to deliver end-to-end testing of systems.

Sometimes the expectation is that the SIT team will become the company's quality assurance team, even though they have no direct influence on the way the software is developed. The assumption is that increasing the length and rigour of testing will improve the quality of released products, and it does. But it does nothing to improve the quality of the products as they are built, and so it is not quality assurance.

In the best of worlds, this team can act as an agent of change. It can introduce measures and processes that prevent defects from being written into the code in the first place; it can work with development teams to identify areas that need fixing; and it can highlight successful improvements to development processes.

In the worst of worlds, the pressures of software development drive more and more projects into extended test cycles in which huge numbers of defects are found and project schedules slip. The test team attracts blame for finding defects and for the length of the test cycles, and nobody knows how to solve the problem.

Acceptance Testing

Large-scale software projects often have a final phase of testing called "acceptance testing". Acceptance testing forms an important and distinctly separate phase from previous testing efforts, and its aim is to ensure that the product meets minimum customer standards of quality before it is accepted by the client or customer. This is where somebody has to sign the cheque.

Often the client will have his end users conduct the testing to verify that the software has been implemented to their satisfaction (this is called "user acceptance testing" or "UAT"). Often UAT tests processes which lie outside of the software itself, to make sure the whole solution works as advertised.

While other forms of testing can be more "free-form", the acceptance test phase should represent a planned series of tests and release procedures to ensure that the output from the production phase reaches the end user in an optimal state, as free of defects as is humanly possible.

In theory, acceptance testing should also be fast and relatively painless. Previous phases of testing will have eliminated any issues and this should be a formality. In immature software development, however, acceptance testing becomes a last-ditch trap for issues, loading the project with risk.

Acceptance testing also typically focuses upon artefacts outside the software itself. A solution often has many elements beyond the software: these might include manuals and documentation, process changes, training material, operational procedures and operational performance measures (SLAs). These are not usually tested in earlier phases, which focus on the functional aspects of the software itself, yet the success of the solution as a whole depends on them too. They are typically not evaluated until the software is complete, because they require a fully functional piece of software, with its new workflows and new data requirements, to evaluate.

Test Automation:
Organisations often seek to reduce the cost of testing. Most organisations aren't comfortable with reducing the amount of testing, so instead they look at improving the efficiency of testing. Luckily, there are a number of software vendors who claim to be able to do exactly that! They offer automated tools which take a test case, automate it and run it against a software target repeatedly. Music to the ears of management!
However, there are some myths about automated test tools that need to be dispelled:
• Automated testing does not find more bugs than manual testing - an experienced manual tester who is familiar with the system will find more defects than a brand-new suite of automated tests.
• Automation does not fix the development process - as harsh as it sounds, testers don't create defects, developers do. Automated testing does not improve the development process, although it might highlight some of the issues.
• Automated testing is not necessarily faster - the upfront effort of automating a test is much higher than the cost of conducting it manually, so it will take longer and cost more to test the first time around. Automation only pays off over time, and it will also cost more to maintain.
• Everything does not need to be automated - some things don't lend themselves to automation; some systems change too fast for automation; some tests benefit from partial automation. You need to be selective about what you automate to reap the benefits.

The Hidden Cost:
The hidden cost of test automation is in its maintenance. An automated test asset which can be written once and run many times pays for itself much quicker than one which has to be continually rewritten to keep pace with the software. And therein lies the rub.

Automation tools, like any other piece of software, talk to the application under test through an interface. If the interface is changing all the time, then no matter what the vendors say, the tests will have to change as well. On the other hand, if the interfaces remain constant but the underlying functional code changes, then your tests will still run and will (hopefully) still find bugs.

Many software projects don't have stable interfaces. The user-facing interface (or GUI) is in fact the area most prone to change, because it is the part the users see most. Trying to automate the testing of a piece of software which has a rapidly changing interface is a bit like trying to pin a jellyfish to the wall.
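One common defence is to put an abstraction layer, often called a "page object", between the tests and the volatile interface, so that when the GUI changes only the wrapper has to be rewritten. A rough sketch of the idea, built around an entirely hypothetical ui driver object:

    class LoginPage:
        """Wraps the volatile GUI behind a stable API. When the interface
        changes, only this class changes; the tests themselves survive."""

        def __init__(self, ui):
            self.ui = ui  # hypothetical GUI driver object

        def log_in(self, user, password):
            self.ui.type_into("username", user)  # field names live here only
            self.ui.type_into("password", password)
            self.ui.click("submit")

    # A test written against LoginPage never mentions field names or layout:
    #     LoginPage(ui).log_in("alice", "secret")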

What is Automated Testing Good For?
Automated testing is particularly good at:
• Load and performance testing - automated tests are a prerequisite of conducting load and performance testing. It is not feasible to have 300 users manually test a system simultaneously; it must be automated (see the sketch after this list).
• Smoke testing - a quick and dirty test to confirm that the system "basically" works. A system which fails a smoke test is automatically sent back to the previous stage before further work is conducted, saving time and effort.
• Regression testing - checking that functionality which should not have changed in a current release of code has indeed not changed. Existing automated tests can be re-run, and they will highlight changes in the functionality they are designed to test (in incremental development, builds can be tested and returned for rework quickly if they have altered functionality delivered in previous increments).
• Setting up test data or pre-test conditions - an automated test can be used to set up test data or test conditions which would otherwise be time-consuming to prepare.
• Repetitive testing which includes manual tasks that are tedious and prone to human error (e.g. checking account balances to 7 decimal places).
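As a flavour of automated load testing, the following minimal Python sketch simulates 300 simultaneous user sessions with a thread pool; the one_user_session workload is invented for the example:

    import time
    from concurrent.futures import ThreadPoolExecutor

    def one_user_session(user_id):
        # Hypothetical workload: in a real load test this would drive the
        # application under test (e.g. log in, run a query, log out).
        time.sleep(0.01)
        return user_id

    # Simulating 300 simultaneous users is infeasible by hand, trivial in code.
    start = time.monotonic()
    with ThreadPoolExecutor(max_workers=300) as pool:
        results = list(pool.map(one_user_session, range(300)))
    print(f"{len(results)} sessions completed in {time.monotonic() - start:.2f}s")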

Pitfalls of Test Automation:
Automating your testing should be treated as a software project in its own right. It must clearly define its requirements. It must specify what is to be automated and what isn't. It must design a solution for testing, and that solution must be validated against an external reference.

Consider the situation where a piece of software is written from an incorrect functional specification. The testers then take the same functional specification and write an automated test from it. Does the code pass the test? Of course. Every time. Does the software deliver the result the customer wants? Is it valid? Nope. Of course, manual testing can fall into the same trap, but manual testing involves human testers who ask impertinent questions and provoke discussion. Automated tests only do what they're told. If you tell them to do something the wrong way, they will, and they won't ask questions.

Further, automated tests must be designed with maintenance in mind. They should be built from modular units and should be designed to be flexible and parameter-driven (no hard-coded constants). They must follow rigorous coding standards, and there should be a review process to ensure the standards are applied. Failure to do this will result in a code base that is difficult to maintain, incorrect in its assumptions and that decays faster than the code it is supposed to test.
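As a sketch of the "parameter-driven, no hard-coded constants" advice, the expected values below live in a single data table instead of being scattered through the test logic; the VAT calculation is a made-up example:

    import unittest

    def vat_inclusive(price, rate=0.20):
        # Hypothetical function under test.
        return round(price * (1 + rate), 2)

    # Test data kept in one table: changing a business rule means editing
    # data, not rewriting the test logic.
    CASES = [
        (100.00, 0.20, 120.00),
        (19.99, 0.20, 23.99),
        (50.00, 0.00, 50.00),
    ]

    class VatTest(unittest.TestCase):
        def test_vat_table(self):
            for price, rate, expected in CASES:
                with self.subTest(price=price, rate=rate):
                    self.assertEqual(vat_inclusive(price, rate), expected)

    if __name__ == "__main__":
        unittest.main()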
