Don't Discard Test-driven Development in the Cloud

By Arin

Writing software for the cloud can be very different from writing software that runs on a single server. It can make test-driven development (TDD) more complicated, but TDD is still well worth doing. For the purposes of this article, I'll consider two types of software development in the cloud: cloud hosting and distributed computing.

In cloud hosting, you are still writing the same type of software that you have always written. A simple example is a website developed in PHP, Java, Ruby on Rails, or .NET. You are not developing anything out of the ordinary, and the only impact cloud computing makes on your architecture is that it is easier for you to scale the web UI of your system as traffic grows.

For cloud-hosting scenarios, nothing has changed with regard to TDD. The typical xUnit frameworks provide all that you need to write solid software using good XP practices.
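To illustrate the point, the familiar xUnit red-green cycle carries over to cloud-hosted code unchanged. Here is a minimal sketch using Python's unittest (the function under test is invented for the example; the same pattern applies in JUnit, NUnit, and the other xUnit frameworks):

```python
import unittest

def slugify(title):
    """Turn an article title into a URL slug (the unit under test)."""
    return "-".join(title.lower().split())

class TestSlugify(unittest.TestCase):
    def test_lowercases_and_hyphenates(self):
        self.assertEqual(slugify("Cloud Hosting Basics"), "cloud-hosting-basics")

    def test_collapses_extra_whitespace(self):
        self.assertEqual(slugify("  TDD   in the Cloud "), "tdd-in-the-cloud")

if __name__ == "__main__":
    # exit=False so the runner reports results without terminating the process
    unittest.main(argv=["tdd_example"], exit=False)
```

Whether this code ends up on one server or a hundred, the tests are written and run the same way.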

Distributed computing is different. For the purposes of this article, I will define it as software designed to scale horizontally across many servers, whether to improve reliability, to improve speed, or simply to spread the computational requirements of complex algorithms across many machines.
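To make the scatter/gather idea behind that definition concrete, here is a toy sketch in Python. A thread pool on one machine stands in for the pool of servers; a real distributed framework would handle distribution, failure handling, and aggregation for you:

```python
from concurrent.futures import ThreadPoolExecutor

def costly(n):
    # Stand-in for one shard of an expensive computation.
    return sum(i * i for i in range(n))

shards = [10_000] * 8                            # eight independent work units
with ThreadPoolExecutor(max_workers=4) as pool:  # four "workers" standing in for servers
    results = list(pool.map(costly, shards))     # scatter the work, gather the results
total = sum(results)                             # combine the partial answers
```

The testing challenge is exactly here: each worker is easy to unit-test in isolation, but the scattering, gathering, and failure paths are what distributed systems get wrong.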

The use of clouds for distributed computing is more complicated and less common than the more straightforward cloud hosting scenario. However, more teams are being called on to develop these types of applications, and there are many open source projects that are making it easier to tap into the more advanced powers of cloud computing. 

Read more at:

Hypothesis Testing

A very helpful article by Jesse Farmer:

Say I hand you a coin. How would you tell if it's fair? If you flipped it 100 times and it came up heads 51 times, what would you say? What if it came up heads 5 times instead? In the first case you'd be inclined to say the coin was fair, and in the second case you'd be inclined to say it was biased towards tails. How certain are you? Or, even more specifically, how likely is it actually that the coin is fair in each case?
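Farmer's question is a textbook two-sided binomial test: how probable is an outcome at least as extreme as the one observed, if the coin really is fair? A small standard-library sketch (this code is mine, not from the article):

```python
from math import comb

def binom_pmf(k, n, p=0.5):
    # Probability of exactly k heads in n flips of a coin with heads-probability p.
    return comb(n, k) * p**k * (1 - p)**(n - k)

def two_sided_p(k, n, p=0.5):
    # p-value: total probability of all outcomes no more likely than k heads.
    pk = binom_pmf(k, n, p)
    return sum(binom_pmf(i, n, p) for i in range(n + 1)
               if binom_pmf(i, n, p) <= pk * (1 + 1e-9))

print(two_sided_p(51, 100))  # ~0.92: 51 heads in 100 is entirely unsurprising
print(two_sided_p(5, 100))   # ~1e-22: overwhelming evidence the coin is biased
```

The two p-values quantify the intuition in the quote: 51 heads is perfectly consistent with a fair coin, while 5 heads essentially rules one out.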

Read more at:

A Pragmatic Strategy for NOT Testing in the Dark

© 1999 Johanna Rothman and Brian Lawrence. Originally published in Software Testing and Quality Engineering, Mar./April 1999 Issue.
A project manager strides purposefully into your office. "JB, this disk has the latest and greatest release of our software. Please test it." You say "OK, OK. What does it do?" The manager stops in his tracks and says "Uh, the usual stuff..."
Sound familiar? We've run into this situation as employees and as consultants. We've seen testers take the disk, stick it in the drive, and just start testing away.
That's testing in the dark. We think there are approaches that are more productive. When we test or manage testers, we plan the testing tasks to know what value we can get from the testing part of the project.
Let's try turning on the lights!
Even for a short (2-week) testing project, we've used this strategy. Consider this approach:
  • Discover the product's requirements, to know what testing needs to be done;
  • Define what quality means to the project, to know how much time and effort we can apply to testing;
  • Define a test plan, including release criteria, to check out different people's understanding of what's important about the product, and to know when we're ready to ship.

Discover the Requirements

The first part of your planning is to play detective. Your product will have a variety of requirements over its lifetime. Some will be more important sooner, others, later. You have to discover this project's requirements.

Read more at:

Metrics for Software Testing: Managing with Facts, Part 2: Process Metrics

A very helpful article by Rex Black:

In the previous article in this series, I offered a number of general observations about metrics, illustrated with examples. We talked about the use of metrics to manage testing and quality with facts. We covered the proper development of metrics, top-down (objective-based) not bottom-up (tools-based). We looked at how to recognize a good set of metrics.
In the next three articles in the series, we'll look at specific types of metrics. In this article, we will take up process metrics. Process metrics can help us understand the quality capability of the software engineering process as well as the testing capability of the software testing process. Understanding these capabilities is a prerequisite to rational, fact-driven process improvement decisions. In this article, you'll learn how to develop and understand good process metrics.

Read more at:

Checklist for Windows Compliance Testing

A very helpful checklist for testers by Ray Claridge, Product Manager at IPC Media

For Each Application

1. Start the application by double clicking on its icon. The loading message should show the application name, version number, and a larger pictorial representation of the icon.

2. The main window of the application should have the same caption as the caption of the icon in Program Manager.

3. Closing the application should result in an "Are you sure?" message box.

4. Attempt to start the application twice. This should not be allowed; you should be returned to the main window.

5. Try to start the application twice as it is loading.

6. On each window, if the application is busy, then the hourglass should be displayed. If there is no hourglass (e.g. alpha access enquiries), then some "enquiry in progress" message should be displayed.

7. The window caption of every window should include the application name and the window name - especially for error messages. Check captions for spelling, grammar, and clarity, particularly at the top of the screen, and make sure each window title makes sense.
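Checks like item 7 are easy to automate once you can read a window's caption (the caption-reading part is OS-specific and omitted here). A hypothetical sketch of the check itself; the function name and caption format are my own invention:

```python
def caption_ok(caption, app_name, window_name):
    # Checklist item 7: the caption should carry both the application
    # name and the window name, e.g. "MyApp - Save Error".
    return (
        caption.strip() == caption   # no stray leading/trailing whitespace
        and app_name in caption
        and window_name in caption
    )

assert caption_ok("MyApp - Save Error", "MyApp", "Save Error")
assert not caption_ok("Untitled", "MyApp", "Save Error")
```

Wiring this to a real caption source (e.g. a UI automation library) turns a manual compliance pass into a repeatable smoke check.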

Software QA Series - Principles of Quality - Free Webinar

This foundation webinar is designed for learning and understanding IT quality concepts. It provides an excellent overview of the entire IT quality profession, and a macro-level introduction to quality assurance by introducing and reviewing the principles expounded by leading quality experts.

This webinar series also addresses the important considerations for organizations that want to properly organize their quality initiative for improved productivity and organizational integrity.

The participant will be introduced to the important quality principles, concepts, responsibilities and vocabulary. Here is what we will cover:

Understanding who is responsible for Quality
Discussion of examples of quality initiatives that work
Description of roadblocks on the road to quality
Description of Industry Models
Find out what it takes to be a successful software QA person and how quality can add significant value to software development.

Best Practices in Performance Testing to Ensure Success

On September 28 at Noon EDT, Neotys invites you to a webinar: "Best Practices in Performance Testing to Ensure Success".
In this live webinar with leading retailer The Bon-Ton Stores, you'll learn how to optimize the performance of your web applications while improving your responsiveness to the business with ease and without any special skills. Dan Gerard, Divisional VP of Technical & Web Services, and Will Esclusa, Manager of Web Services & Technologies at The Bon-Ton Stores, will join me, Rebecca Clinard, Technology Strategist at Neotys, to discuss:
  • Meeting the challenge of establishing your own in-house performance testing
  • How you can better meet the urgent and changing needs of the business
  • Overcoming the challenges of load testing a complex Web 2.0 eCommerce site
  • Achieving the "10-minute Test Script"
  • The right way to handle the squeeze of tight timeframes
  • How to improve test productivity and efficiency for resource-constrained technology teams
The presentation will be followed by an audience Q&A.
Best practices webinar - Wednesday, September 28, Noon EDT (9 a.m. PDT)
Register for "Best Practices in Performance Testing to Ensure Success" today.

Register here -

Top Ten Risks When Leading an Offshore Test Team (Part 2)

By Michael Hackett, Senior Vice President, LogiGear Corporation

In part 1 of this article, we explored the first five of the top ten risks, including:

1. Offshore work can be difficult to measure or quantify, leading to lack of confidence in the offshore effort
2. Lack of visibility into day-to-day work
3. Lack of a competent lead/point-of-contact
4. Lack of contingency plans for downtime
5. Offshore teams lose access to onshore servers and applications

The second five risks are based on the communication and cultural problems that exist between distributed teams, as well as the business climate in popular offshoring locations.

Key Principles of Test Design

Test design is the single biggest contributor to success in software testing. Not only can good test design result in good coverage, it is also a major contributor to efficiency. The principle of test design should be "lean and mean." The tests should be of a manageable size and at the same time complete and aggressive enough to find bugs before a system or system update is released.
Test design is also a major factor for success in test automation. This is not that intuitive. Like many others, I initially thought that successful automation is a matter of good programming or even "buying the right tool." Finding that test design is the main driving force for automation success is something that I had to learn over the years, often the hard way.
What I have found is that there are three main goals that need to be achieved in test design. I like to characterize them as the "Three Holy Grails of Test Design" - a metaphor based on the stories of King Arthur and the Round Table, since the three goals are difficult to reach, mimicking the struggle King Arthur's knights experienced in their search for the Holy Grail. This article introduces the three "grails" to look for in test design. In subsequent articles in this series I will go into more detail about each of the goals.
Read more at:

Effective Management of Test Automation Failures

By Hung Q. Nguyen, CEO, President, LogiGear Corporation
In recent years, much attention has been paid to setting up test automation frameworks which are effective, easy to maintain, and allow the whole testing team to contribute to the testing effort. In doing so, we often leave out one of the most critical considerations of test automation: What do we do when the test automation doesn't work correctly?
Testing teams need to develop a practical solution for determining who's accountable for analyzing test automation failures, and ensure that the right processes and skills exist to effectively do the analysis.  There are three primary reasons why your test automation may not work correctly:
  1. There is an error in the automated test itself
  2. The application under test (AUT) has changed
  3. The automation has uncovered a bug in the AUT
The first step whenever a failed test occurs in test automation is to figure out what happened. So who should be doing this?
Too often in testing organizations, it's the case that as soon as a test engineer runs into a problem with the test automation, they simply tell the automation engineer "Hey, the test automation isn't working!" The job of analysis then falls to the automation engineer, who is already overburdened with implementing/maintaining new and existing test automation.
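One pragmatic way to make that accountability concrete is to force every automation failure into one of the three buckets above before it is handed off, so the right person gets the follow-up work. A hypothetical sketch (the bucket names follow the article; the triage record and routing table are my own invention):

```python
from dataclasses import dataclass

# The three primary reasons an automated test may fail, per the article.
CAUSES = ("test_error", "aut_changed", "aut_bug")

@dataclass
class FailureTriage:
    test_name: str
    cause: str          # one of CAUSES
    owner: str          # who performed the analysis
    notes: str = ""

    def __post_init__(self):
        if self.cause not in CAUSES:
            raise ValueError(f"cause must be one of {CAUSES}")

def route(triage):
    # Only genuine script errors go back to the automation engineer;
    # AUT changes go to whoever owns the test design, and real bugs get filed.
    return {
        "test_error": "automation engineer",
        "aut_changed": "test designer",
        "aut_bug": "bug tracker",
    }[triage.cause]

t = FailureTriage("login_smoke", "aut_changed", owner="QA lead")
print(route(t))  # test designer
```

The point is not the code but the discipline: "the automation isn't working!" is not an acceptable triage result; one of the three causes must be named before the hand-off happens.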
Read More at:

Getting Automated Testing Under Control

In an effort to counter test automation challenges, Hans Buwalda and Maartje Kasdorp cite test clusters, test lines, and navigation as tools for teams to execute testing projects. With descriptive explanations and accompanying diagrams, the authors argue that test design should be strictly separated from the automation of tests.
This article first appeared in STQE, November/December 1999.

Global Software Test Automation - Book Review

Global Software Test Automation is the first book to offer software testing strategies and tactics for executives. Written by executives and endorsed by executives, it is also the first to offer a practical business case for effective test automation, as part of the innovative new approach to software testing: Global Test Automation — a proven solution, backed by case studies, that leverages both test automation and offshoring to meet your organization's quality goals.

Mobile Application Testing: Process, Tools and Techniques

The market for mobile applications increases every day and is becoming more and more demanding as technology grows. In a new study, Yankee Group predicts a $4.2 billion “Mobile App Gold Rush” by 2013 which includes:

- Estimated number of smartphone users: 160 million
- Estimated number of smartphone app downloads: 7 billion
- Estimated revenue from smartphone app downloads: $4.2 billion

At Organic, our goal is to stay on the cutting edge of emerging platforms by launching new and diverse applications. We have this goal in mind when developing mobile web applications. We utilize some of the same styles of programming used for developing web applications, and we follow the same testing methodology employed for web development when testing our mobile applications.

Read More at:

Agile Test Automation - Truth, Oxymoron or Lie?

It can be confusing for everyone on an agile team to understand when or what to test when there isn't a test phase or any formally documented requirements. Whatever your agile methodology, projects require a change in the way QA and development work together. Using technology and automation is much more difficult, and finding a practical approach to testing is critical for successful agile projects.

George Wilson explores how testing in agile is different and gives pragmatic advice to ensure that application quality, within an agile environment, isn't compromised. Discussions on the techniques for quickly getting control of manual testing and progressing to automated testing in agile will leave you with fresh thinking to resolve or prevent any testing dysfunctions in your agile teams.

Download the presentation from here -

Watch recorded webinar -


TestMaker - Open Source Software Test Platform

Here is what we have for you:
- A systematic, simple way to understand and implement effective tests of your application
- Test software to build tests and deploy to desktop, grid, and cloud environments
- A clear tutorial approach to the PushToTest methodology of building and repurposing tests to understand the correct functioning, performance, and scalability of your application
- An organized reference guide to the "best of the best" tips, techniques, patterns and antipatterns from PushToTest over the years, and how it all fits together
- Invitations to participate in weekly free live Webinars to learn how the test experts effectively use Open Source Testing to build scalable applications

Download the tool and read more here -