development shops and individuals in areas that are traditionally more formal, such as safety-critical and highly reliable software.
No Specification Means No Testing:
Note: To test means to compare an actual result to a standard.
The first problem in making a convincing case for software testing today is that no one can test without a specification. In software development, testing is still the most misunderstood word in the quality vocabulary. The Institute of Electrical and Electronics Engineers (IEEE) defines a test as "a set of one or more test cases." The IEEE defines testing as "the process of analyzing a software item to detect the differences between existing and required conditions (that is, bugs) and to evaluate the features of the software item." This definition invites the tester to go beyond comparing actual results to a standard (testing) and to evaluate the software (validation). In effect, the definition invites testers to express opinions without giving them either guidelines for forming those opinions or tools (metrics) for defending them. The IEEE definition makes testers responsible for both verification and validation, without distinguishing between the two. This practice, pursued with vigor, is more likely to incite unrest among developers than it is to drive quality improvements. To understand what I mean, consider the definitions of the words verification and validation.

If there is no standard to compare against, there can be no test. In the survey discussed earlier, only one person in 50 provided a correct definition of the word test. In shops where some form of RAD is in use, people may think they are testing. But when the specifications appear only after the software is finished, testing is an impossibility. The same is true for most of the descendants of RAD: the agile methodologies, eXtreme Programming (XP), Lean Development (LD), Adaptive Software Development (ASD), and so on. The only possible exception is the Dynamic Systems Development Method (DSDM).
I will discuss the RAD/agile methodologies, and how to conduct testing in them, in more detail in later chapters.

Webster's New World Dictionary defines valid as "the state, quality, or fact of being valid (strong, powerful, properly executed) in law or in argument, proof, or citation of authority." Validation is the process of confirming that a thing is properly executed. Validation requires a subjective judgment by the examiner, and that judgment must be defended by argument, as in, "I think this is wrong because...." Validation answers the question "Is the system doing the right thing?" Just because a system was designed to do things a certain way, and does do things that way, does not mean that this way is the right way or the best way.

According to Webster's New World Dictionary, verify means "(1) to prove to be true by demonstration, evidence, or testimony; confirm or substantiate; (2) to test or check the accuracy or correctness of, as by investigation, comparison with a standard, or reference to the facts." Verification is fundamentally the same as testing, with a bias toward correctness, as in "verifying that something performs according to the specification." Verification answers the question "Does the system do what it is supposed to do?"

Comparing the system's response to a standard is straightforward when there is a specification that states what the correct response will be. The fundamental problem with testing in a RAD/agile environment is that, since there is generally no specification, there can be no true test. RAD/agile testers are exploring the software and validating it, finding errors on the fly. To convince developers that something is wrong when there is no specification to cite, a tester must have a convincing argument and high professional credibility.
How many times can a tester convince developers that something is invalid or wrong if no metrics are in use and the best argument the tester can offer is "I think it is a bug because I think it is a bug"?
Furthermore, it is virtually impossible to automate testing if there is no standard for the expected response. An automated test tool cannot make subjective judgments about the correctness of results on the fly. It must have a standard expected response to compare with the actual response in order to make a pass/fail determination.
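The pass/fail mechanics just described can be sketched in a few lines. This is a minimal illustration, not any particular tool's API; the function names and the sample system under test (a price-rounding routine) are invented for the example:

```python
# Minimal sketch of automated pass/fail checking: an automated test can only
# compare an actual result against a recorded expected result (the standard).
# Without that standard, no verdict is possible.

def run_check(test_input, actual_result, expected_result):
    """Return a pass/fail verdict by comparing the actual result to the standard."""
    verdict = "PASS" if actual_result == expected_result else "FAIL"
    return f"{test_input!r}: {verdict} (expected {expected_result!r}, got {actual_result!r})"

# A tiny hypothetical system under test: a price-rounding routine.
def round_price(value):
    return round(value, 2)

# Each test case carries its own expected response -- the specification in miniature.
cases = [(3.14159, 3.14), (10.0, 10.0)]
for raw, expected in cases:
    print(run_check(raw, round_price(raw), expected))
```

The point of the sketch is the shape of the data, not the code: every automated case must carry its expected result with it, which is exactly what a missing specification makes impossible.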
Being First to Market: Market/Entrepreneurial Pressures Not to Test:
The fact is that in many parts of commercial software development, it has not been necessary to use formal software testing methods or metrics in order to succeed commercially. The type of software I am referring to here is produced by commercial software companies for mass consumption, usually on the PC platform, and is, one hopes, not used in safety-critical systems. This is software that anyone can buy in a store or over the Web, such as word processors, graphics programs, and spreadsheets.

In this first-to-market, entrepreneur-driven development environment, managers are eager to cut any expenditure or activity that does not add to the bottom line. They are also eager to remove any obstacle that could jeopardize a ship date. Testing has not proven to be a prerequisite for shipping a successful product.

The most common reasons given for not using formal testing methods are usually of the form, "We don't need formal methods; we are only a small shop." The general perception seems to be that formal methods must be written down by somebody else, and that they must be specialized and complicated. Formal simply means following an established or fixed set of procedures. The real problem here is the perceived lack of productivity of testing; it is a cultural problem.

Testing is perceived as a cost center that does not contribute to the bottom line. Thus, the perception in some shops is that testing does not add much value to the product, and if it does not add much value, it is not going to get much funding. Because commercial test efforts are often underfunded and staffed with warm bodies instead of trained testers, mediocre test results are the norm, and in recent years we have seen more and more companies dissolve the software test group altogether.
The Lack of Trained Testers:
One of my first mentors when I started testing software systems had been a tester in the industry for many years. He explained how, in his day, a good analyst might be promoted to programmer after four years of reviewing code and writing design specifications; then, after four years in development, the best programmers could aspire to promotion into the system test group. The first year in the system test group was devoted to learning how to test the system.

This situation still exists in some safety-critical shops, but it is by no means the norm in commercial software shops. The simple fact is that few testers or developers have ever been trained in formal methods or testing techniques. Dorothy Graham, a noted author in the fields of inspection, testing, and tester certification, estimated in the late 1990s that only 10 percent of testers and developers had ever had any training in test techniques. The results of the survey I mentioned earlier support this assertion.

Where do software testers get their training? In the U.S., software testers are mostly homegrown. Most of the available test training in North America comes through public and private seminars. In Europe, a higher percentage of the students who attend seminars have science or engineering backgrounds than their United States counterparts, but, again, most software test training is done in public and private workshops. Few metrics are in use, even among the better-educated testers.

Academia is largely uninvolved in the actual business of commercial software production. Software testing is not the only topic missing from the curriculum; wireless communications, digital video editing, and multimedia development are other omissions. University professors are busy teaching well-established topics and exploring future technologies.
Few institutions cover the ground that would serve the current needs of industry, such as training the next generation of professional testers. Few universities offer classes in software testing, and fewer still require software testing classes as part of a software engineering curriculum. Unfortunately, this sends the message to business and to the development community that software testing is not worthwhile.

In the 1990s, finding information on many testing topics was difficult. Few college courses were available in software testing, and only a few conferences were devoted to the topic. Since the advent of the Web, this situation has changed dramatically. The Web has made it possible for testers to find a great deal of information on software testing easily. Enter "software + testing" in your favorite Web search engine today, and you are likely to get hundreds of thousands of matches. But these improvements have not improved the general state of the software test bench or the work of testing.

Traditionally in the United States, test groups were staffed with computer science graduates looking for entry-level positions in programming. But since the 1990s, we have seen the number of testers with any type of science degree drop steadily. The people currently hired to perform testing do not come from a tradition of experimental practice, science, or engineering, because employers do not see the need to pay for such people to fill test positions. This trend is reinforced by a focus on market demands rather than on product reliability. While formal tester training would address the need for these skills, little of it is available.

I do not think there is a simple answer to this situation; it is the result of several factors. One contributor is that in most companies, testing is not a respected profession; it is a phase. Most testers are transients; they are moving through testing on their way to somewhere else.
For example, it is common for nontechnical staff, or for computer science students fresh out of school, to use a stint in the test group as a stepping-stone to a job in operations or development. Consequently, they do not stay in testing.

The poor funding that test groups routinely receive today also makes testing a phase rather than a career. There are no resources for education (the time to go and take a class). Management is left to ponder the questions "Why educate testers if they will only move on to other careers?" and "Why spend money on a test effort that probably won't be very good?"

Testing lacks the credibility it once had. And as the testers' level of knowledge has fallen, with few metrics in use and mostly ad hoc methods, the quality of testing has fallen with it. The fact is, the real quality improvements in commercial software have come about because of the Web and the international acceptance of standards. Let me explain.
Standards Reduce the Amount of Testing Required:
The quality improvements of the 1990s have been driven by standardization, not by testing or quality assurance. I have already mentioned how the Web enables software makers to reduce support costs and get bug fixes to users quickly and efficiently, instead of spending more to remove the bugs in the first place. Another development that has made testing less important is the rapid adoption of standards across our larger systems.

When I wrote my first paper on systems integration in 1989, I described integrating a system as like building a rock wall with your bare hands from a collection of irregularly shaped stones, baling wire, and clay. The finished product required operators standing by in the data center, 24/7, ready to slip a finger or a wrench into whatever holes appeared.

Every vendor had its own proprietary everything: its own link libraries, transport protocols, data structures, and database languages. There were no standards for how disparate systems would interoperate. In fact, I am not sure the term interoperability even existed in the 1980s. For example, when we built online banking for Prodigy, we wanted our IBM system to "talk" to the Tandem at the bank. We had to invent our own message headers and write our own black boxes to translate messages from IBM to Tandem and back again. All of this code was new and, rightly, regarded as untrustworthy. It had to be tested without mercy.

System modules were written in machine-specific languages, and each machine had its own operating system. Modem and router makers each had their own particular ways of doing things. A message might be broken down and rebuilt a dozen times between the application and the client on the other side of the receiving modem. Testing such a system meant testing each component, in the full knowledge that something as simple as a text string could be treated differently by each successive element in the network. Integrating applications on the networks of the day required testing in a major way.
During my first two years as a systems integrator, my line monitor was my best friend and my only tracking tool. I actually got to the point where I could read the binary message headers as they came off the modem.

We have come a long way in the intervening years. I am not saying that all the makers suddenly agreed to give up their proprietary protocols, structures, and ways of doing things; they did not. But eventually everything runs into the sea, or in our case the Internet, and the Internet is based on standards: IP, HTML, XML, and so on. This means that sooner or later, everyone has to map their proprietary "thing" onto a standards-based "thing" so that they can do business on the Web. (See the sidebar on the standards that have improved software and systems.)

Thanks to standardization, many of the most technical testing tasks, like me and my line monitor, are no longer necessary. This has also encouraged management to staff test groups with fewer technical testers and more entry-level, nontechnical testers. The rapid growth of RAD/agile development methods, which produce no specification against which a tester can test, has also eliminated many test tasks.

Clearly, there is great room for improvement in the software testing environment. Testing today is often inadequate and sometimes nonexistent. Nevertheless, software testing can add value, even within the constraints (and the apparent chaos) of the current market, and I will argue that testing can and should add value and improve product quality. The next chapter examines that very issue.
Some of the Standards That Have Improved Software and Systems:
Several standards in use today support e-business interoperability. They are used to map information flows into a form that can be processed by the next element in the process, across diverse business services, applications, and legacy systems. Open Buying on the Internet (OBI), cXML, and XML/EDI are some of the most popular business-to-business (B2B) standards in use today. BizTalk, another standards offering, is a framework of interoperability specifications. BizTalk applications support information flows and work flows between businesses, allowing you to write rules that govern how a system's flows are translated, stored, and otherwise manipulated before being sent on to the next component in the flow.

With the adoption of XML, it is now possible to host a Web service on an intranet or on the Web. A Web service is basically an application that lives on the network and is available to any client that can engage it. It represents a "standard" version of an application that can be located, engaged, and used (Microsoft calls this "consuming" the Web service) dynamically across the Web.

Today, bringing up an e-commerce application does not require a line monitor, nor does it require the exhaustive testing that was necessary before the Web. Applications are more reliable from the start because they are built on standards. The availability of a Web-based system is generally measured in 9s, with an availability of 99.9 percent being typical for a commercial system.
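The arithmetic behind "nines" of availability is worth making explicit; nothing here is assumed beyond the definition of availability itself:

```python
# Downtime per year implied by an availability figure:
# downtime = (1 - availability) * hours in a year.
HOURS_PER_YEAR = 24 * 365  # 8,760 hours, ignoring leap years

def annual_downtime_hours(availability_percent):
    """Hours of downtime per year allowed by a given availability percentage."""
    return (1 - availability_percent / 100) * HOURS_PER_YEAR

for level in (99.0, 99.9, 99.999):
    hours = annual_downtime_hours(level)
    print(f"{level}% available -> {hours:.2f} hours of downtime per year")
```

At 99.9 percent the budget is about 8.8 hours a year; each additional 9 cuts the allowable downtime by another factor of ten.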
That translates into less than nine hours of downtime each year.

In the near future, we will find that we do not even know where our information comes from: our applications will automatically and transparently query a Universal Description, Discovery and Integration (UDDI) server anywhere on the planet to locate and engage Web-hosted Web services to do X, Y, and Z as part of the application.

Development Tools Support Standards, Too:

What our systems can do, how they interact, and how reliable they are have all improved greatly as a result of our adoption of the Web and its standards. Our development tools have greatly improved as well. For example, the .NET development environment, Visual Studio .NET, can be configured to enforce design and coding standards and policies on developers through the use of templates. These templates can be customized by the company. They can impose an important structure on the development process, limit what developers can do, and require that they do certain things, such as:

-Always use the company-approved name for a specific function.
-Always use a certain form to gather certain information.
-Always use a certain data structure to hold a certain type of information.
-Submit a program module only after every required action has been performed on it, such as providing all the tool tips and help messages.

When such a template is applied to a business process, it eliminates all sorts of errors in the finished product.
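The spirit of such template-enforced policies can be approximated even without a template system. The following is a hypothetical, minimal pre-submission checker; the company prefix, the module fields, and the rules themselves are all invented for illustration, not taken from Visual Studio .NET:

```python
# Hypothetical pre-submission policy checker, in the spirit of the template
# rules above: every module must use the approved naming scheme, keep its
# records in the approved data structure, and ship with help text and tooltips.

APPROVED_PREFIX = "Acme"        # invented company naming convention
APPROVED_RECORD_TYPE = dict     # invented "certain data structure" rule

def policy_violations(module):
    """Return a list of policy violations for a module descriptor (a dict)."""
    violations = []
    if not module.get("name", "").startswith(APPROVED_PREFIX):
        violations.append("name does not use the approved company prefix")
    if not isinstance(module.get("records"), APPROVED_RECORD_TYPE):
        violations.append("records are not kept in the approved data structure")
    if not module.get("help_text") or not module.get("tooltips"):
        violations.append("help messages or tooltips are missing")
    return violations

good = {"name": "AcmeOrderEntry", "records": {"order": 1},
        "help_text": "How to enter an order.", "tooltips": ["Order ID"]}
bad = {"name": "OrderEntry", "records": [], "help_text": "", "tooltips": []}

print("good module:", policy_violations(good))   # no violations
print("bad module:", policy_violations(bad))     # three violations
```

A real template system enforces these rules at design time rather than at submission time, but the effect is the same: whole classes of errors never reach the finished product, and so never need to be tested for.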
Installation is a matter of copying the compiled files into a directory and invoking the executable. Programs no longer need to be registered on the system where they run. The .NET Framework contains a standardized application execution manager that controls just-in-time (JIT) compilation and loads applications into managed memory. A memory manager ensures that programs run in their own space, and only in their own space.

The .NET Framework relies on a unified set of class libraries that are used by all the languages. The result is that all programs use the same set of link libraries, regardless of the language they are written in. Consequently, as a rule of thumb, once a library method in one module has been tested, it can be assumed to behave the same way when used by any other module. A string, for example, is always treated the same way, instead of each language's compiler shipping with its own set of link libraries, each with its own different bugs.

Developers can write in whichever language fits the job at hand and their abilities. The final product will perform the same regardless of which language it was written in, because all the languages are compiled into a standard binary form that uses the same library routines and runs in its own protected memory space.

This architecture is similar to the one Prodigy was running in 1987, using the IBM Transaction Processing Facility (TPF) operating system and Prodigy's own object-oriented language and common code libraries. It worked reliably then, and it will probably prove reliable now.