Do you think automated testing tools make testing easier?

The benefits of an automated testing tool include more thorough testing, the ability to try testing approaches that were not previously feasible, greater efficiency, and less tedious manual testing. A few things to keep in mind while choosing an automated testing tool:

- Points at which current testing is time consuming.
- Points at which current testing is tedious.
- Problems that are repeatedly missed by the current testing.
- Testing procedures that are repeated again and again.
- Testing procedures that should be carried out but are not.
- Points where current testing coverage is not sufficient.
- Test tracking and management processes that could be implemented.

The choice of automated testing tool can be narrowed down based on the characteristics of the software application. Once the candidate tools are shortlisted, run a trial with each before making the final selection. Ensure that the selected testing tool is appropriate and that its capabilities and limitations are well understood.
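The shortlisting step above can be made more objective with a simple weighted scoring matrix. The sketch below is a minimal illustration only; the criteria, weights, tool names, and trial scores are all hypothetical, not tied to any particular tool.

```python
# Minimal weighted-scoring sketch for comparing candidate test tools.
# Criteria, weights, and scores below are hypothetical examples.

# Weight each selection criterion (weights sum to 1.0).
weights = {
    "record_playback": 0.2,
    "scripting_power": 0.3,
    "object_recognition": 0.3,
    "reporting": 0.2,
}

# Trial scores (1-5) gathered for each candidate tool during evaluation.
scores = {
    "Tool A": {"record_playback": 4, "scripting_power": 3,
               "object_recognition": 5, "reporting": 2},
    "Tool B": {"record_playback": 3, "scripting_power": 5,
               "object_recognition": 4, "reporting": 4},
}

def weighted_score(tool_scores):
    """Sum of score * weight over all criteria."""
    return sum(tool_scores[c] * w for c, w in weights.items())

# Rank candidates from best to worst overall fit.
ranked = sorted(scores, key=lambda t: weighted_score(scores[t]), reverse=True)
for tool in ranked:
    print(tool, round(weighted_score(scores[tool]), 2))
```

The point of the exercise is not the exact numbers but forcing the team to state its criteria and weights before the trial, so the final selection is defensible.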

QTP Codes - Sending Keyboard Input to an Application

The example below uses the PressKey method to send keyboard input to an application.

'An example that presses a key using DeviceReplay.
Set obj = CreateObject("Mercury.DeviceReplay")
obj.PressKey 63  'presses F5

The PressKey method takes the keyboard scan code of the key to press (note: these are IBM PC scan codes, not ASCII values - ASCII 63 is "?"). 63 is the scan code for F5.
Scan codes for the other function keys:
F1 - 59
F2 - 60
F3 - 61
F4 - 62
F5 - 63
F6 - 64
F7 - 65
F8 - 66
F9 - 67
F10 - 68
F11 - 87
F12 - 88

InputBox function in QTP

The InputBox function displays a dialog box containing a prompt label, an input field for the data you expect from the user, and an OK button. It returns the text the user enters.

InputBox(prompt[, title][, default][, xpos][, ypos][, helpfile, context])

Dim MyInput
'Full form, all arguments supplied ("MyHelp.hlp" is a placeholder help file)
MyInput = InputBox("Window Description", "Window Title - InputBox Function", "Enter Your Name", 50, 60, "MyHelp.hlp", 20)
'Short form, prompt only
MyInput = InputBox("Enter Your Name")
MsgBox MyInput

QTP VBScript syntax rules and guidelines

Source - QuickTest Professional Guide (by HP)

When working in the Expert View or in a function library, you should consider the following general VBScript syntax rules and guidelines: 

Case-sensitivity. By default, VBScript is not case sensitive and does not differentiate between upper-case and lower-case spelling of words, for example, in variables, object and operation names, or constants.
For example, the two statements below are identical in VBScript:

Browser("Mercury").Page("Mercury").WebEdit("UserName").Set "MyName"
browser("mercury").page("mercury").webedit("username").set "MyName"

HP Loadrunner Best Practices - tips and tricks for configuration, scripting and execution

Abstract: This guide provides tips and tricks for HP LoadRunner software configuration, scripting, and execution. It is a conglomerate of lessons learned by HP LoadRunner power user Opral Wisham, including unique code as well as code collected from other testers. This guide is intended to help testers just learning to use HP LoadRunner, as well as to provide new best practices for those who have used HP LoadRunner for many years.

Implementing a successful performance testing program

Before you release any new application into production, you must perform extensive capacity and performance validation (CPV) testing. This guide is intended to help new users and seasoned professionals learn new ways to design and implement successful load testing initiatives using HP LoadRunner.

Click here to download the whitepaper below.

Objectives of Software Performance Testing

Objectives of Software Performance Testing - An article for beginners
1. Application Response Time: How long does it take to complete a task?
2. Reliability: How stable is the system under a heavy workload?

LoadRunner Workflow Basic Process - An Automated Performance Testing Tool

Below is the LoadRunner workflow - an article for beginners.

Plan Load Tests:
1. Identify business-critical scenarios. A scenario is a manual workflow, e.g. Login - Open an Account - Logout.
2. Estimate user load. The performance testing requirements will give an idea of the user load, i.e. the number of users using the product. This determines the load to be generated against the product during testing.
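Estimating user load often comes down to simple arithmetic. As an illustration (not part of LoadRunner itself), Little's Law says concurrent users ≈ arrival rate × average session duration; the traffic numbers below are hypothetical assumptions.

```python
# Hypothetical load estimate using Little's Law:
# concurrent_users = arrival_rate * average_session_duration.

arrivals_per_hour = 1200     # users starting a session each hour (assumed)
avg_session_minutes = 10     # average session length in minutes (assumed)

# Convert the arrival rate to the same time unit as the session length.
arrival_rate_per_min = arrivals_per_hour / 60.0   # 20 users per minute

concurrent_users = arrival_rate_per_min * avg_session_minutes
print(int(concurrent_users))
```

An estimate like this gives the steady-state virtual-user count; peak-hour figures or a safety margin can then be layered on top for the actual test design.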

Technology focussed test automation - pitfall

One of my favorite articles:

Ever been in a situation where your test automation project was assigned to someone who was most interested in technology and coding and wanted to get away from the "routine" of testing? Nothing wrong in being technically inclined and getting bored occasionally with testing! (Read more here on dealing with boredom in testing)

However, what normally happens is that the engineer or set of engineers who demonstrate the most propensity to pick up a new tool or technology and run with it, while wanting to get away from regular testing tasks, are handed the reins of test automation. Oftentimes the output of such an automation effort tends to be less than desirable from a testing perspective. What do I mean? How can we have poor automation when employing our "star" technical resources? Note the point that I am making - the probability of ending up with poor automation is higher in such a scenario, where the focus is mainly on the technology or tools used in automating rather than on solving the testing problem well.

Who would you assign to do test automation? The answer to that question is a key determinant of test automation success or failure. Agreed, it is not the sole determinant; however, it does play a very significant role. A common situation one may observe while embarking on test automation is an excessive focus on the tools or technology used to automate tests. Now, how could this be a negative factor in automation? Isn't it a good thing to be keenly focussed on the technology used in automating tests? Yes and no. Yes, since it is important to identify the right set of tools and technologies to automate your tests. You would not want to embark on an automation exercise only to meet roadblocks as the tool proves incapable of meeting your specific requirements. Now that you have the necessary tools, will it pose a problem if you continue to be focussed on the technology used to automate tests? Focus on technology is not bad in itself, unless that focus makes you lose sight of the bigger picture: the testing problem you are trying to solve using that technology.

Emotions and feelings in testing software

Software testers generally look at the requirements to figure out how the product must behave. Often these requirements cover the functional and some non-functional attributes including performance, security, some elements of usability, etc. Tests are developed with expected results that align with these product requirements. So far so good. There is a clear line from the written down requirements to the tests.

As testers, as you proceed with executing your tests, there may be instances where you feel irritated or frustrated with aspects of the software under test. You might feel a range of emotions as you test the software. At such times, what do you do? Do you listen to your feelings, or do you go with the script, merely looking for the expected behavior described in the test cases you are executing? If the test produces the expected behavior, while you have experienced conflicting emotions during testing, what would you do?

Read complete article at:

Agile QA

Agile methodologies incorporated a number of ideas to address QA needs:

The test-driven development (TDD) process requires developers to write test cases before writing the actual code. When a developer completes the code, by definition it has to pass the test cases.
Continuous integration (CI) involves integrating code from all developers early and often - typically many times a day. Combined with automated test cases it results in catching and resolving integration issues immediately.
Small releases reduce risk of running into major issues or surprises.
Frequent releases make it possible to respond to and rapidly address any issues found in production.

TDD and CI make developers clearly responsible for writing and execution of test cases. With a good adherence to the process it is entirely possible to develop software without a dedicated QA team.
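The TDD cycle described above can be sketched in a few lines of Python; the function and test names here are hypothetical illustrations, not from any particular project.

```python
# TDD sketch: the test is written first, then the code that makes it pass.
# Function and test names are hypothetical illustrations.

# Step 1: write the test case before the implementation exists.
def test_apply_discount():
    assert apply_discount(100.0, 10) == 90.0   # 10% off 100
    assert apply_discount(50.0, 0) == 50.0     # no discount

# Step 2: write just enough code to make the test pass.
def apply_discount(price, percent):
    """Return the price after deducting the given percentage."""
    return price * (1 - percent / 100.0)

# Step 3: run the test; by definition, completed code must pass it.
test_apply_discount()
print("test passed")
```

In a real project the test would live in a test suite run by CI on every integration, which is what makes the developer clearly responsible for both writing and executing it.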

Read more:

Major areas IT must focus on to avert problems with cloud computing.

State of Cloud 2011: Time for Process Maturation
Cloud computing clearly has legs. Our 2011 InformationWeek Analytics State of Cloud Computing Survey showed a 67% increase in the number of organizations using these services, from 18% in February 2009 to 30% in October 2010; an additional 13% say they plan to use cloud services within 12 months. Initial cost savings and speed to market remain the biggest drivers.
But are we rushing in without rational process, even a clear understanding of problems that need to be addressed? In many cases, yes. There are ominous indicators, in our data and in our work with a range of companies, that organizations continue to underfund or ignore integration, management and monitoring, potentially setting themselves up for a fall. In this report, we’ll delve into six major areas that IT must focus on to avert problems: integration, security, connectivity, monitoring, continuity planning and long-term staffing.

We’ll also discuss a more, dare we say, existential threat. OK, so that’s overstating things. But we do see a disturbing unwillingness of IT teams to fully take ownership of the cloud as a core part of the enterprise technology fabric. Only 29% of organizations using or planning to use the cloud have evaluated its impact on their architectures. Just 20% implement monitoring that watches applications and throughput; 40% don’t have any monitoring program in place. Talk about blind trust.

Finally, how rigorous is your ROI analysis when scoping the cost of deploying a new service in the cloud vs. internally? Unless you go out five years and include our 11 critical areas of consideration, it could probably be more thorough. We'll delve into what CIOs must consider when deciding where any given IT function belongs and explain why standards don't matter in the cloud. (R1610111)

Don't Give Short Shrift To Software Testing

So your company has decided it wants to launch a vanilla hosted collaboration or social business software system--meaning you aren't planning to customize it. You've presented the business case, but your executive team denies your line item request for someone to test the system each time you upgrade it to a new version. The software vendor does its own QA before it rolls out an upgrade, right? You haven't done any customizations. What could go wrong? This is social software--everyone knows how to use Twitter and Facebook.

Everyone on your team needs to understand that this mentality is a recipe for disaster. All software has bugs. Not all bugs will be found prior to launch, regardless of how thorough the vendor QA testing is. And developers and QA people aren't using the software in your environment. Your organization undoubtedly has unique use cases, and you will be the one hearing your employees/community members scream if something they love or need breaks during an upgrade.

Read the Complete Article Here

Test Strategy Video Lecture/Tutorial - Part 2

This is the second part of Test Strategy Video Lecture/Tutorial.
[View Part 1 of the Tutorial]

Test Strategy Video Lecture/Tutorial - Part 1

In this lecture, Kaner talks about why testers test, what they are trying to learn, and how they can organize their work to support their mission.

Note - These are the most recent lectures by Dr. Cem Kaner
[Click here to watch Part 2 of the tutorial]

Web Testing Checklist


1.1.1 Check that each link takes you to the page it says it will.
1.1.2 Ensure there are no orphan pages (pages that no other page links to).
1.1.3 Check all of your links to other websites.
1.1.4 Are all referenced websites and email addresses hyperlinked?
1.1.5 If some pages have been removed from the site, set up a custom 404 page that redirects visitors to the home page (or a search page) when they try to access a page that no longer exists.
1.1.6 Check all mailto links and verify that messages reach the intended recipient.
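Items 1.1.1 through 1.1.4 are commonly automated: the first step is collecting every link target on a page so each can be checked. The sketch below uses only the Python standard library; the sample HTML is a hypothetical stand-in for a real page.

```python
# Sketch: collect link targets from an HTML page so each can be verified.
# The sample HTML below is a hypothetical stand-in for a real page.
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Gather the href attribute of every <a> tag."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

sample_html = """
<html><body>
  <a href="/home">Home</a>
  <a href="https://example.com/docs">Docs</a>
  <a href="mailto:webmaster@example.com">Contact</a>
</body></html>
"""

collector = LinkCollector()
collector.feed(sample_html)

# Separate mailto links (checklist item 1.1.6) from page links.
mailto = [l for l in collector.links if l.startswith("mailto:")]
pages = [l for l in collector.links if not l.startswith("mailto:")]
print(pages)
print(mailto)
```

From here, each page link could be fetched (e.g. with `urllib.request`) and its status code checked; crawling the whole site and diffing against the sitemap is one way to detect orphan pages.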

Agile Implementation Methodologies

One of my favorite articles on Agile development and implementation, by John Morrison:

Abraham Lincoln said, “If you call a tail a leg, how many legs does a dog have? Four. Because calling it a leg doesn’t make it a leg.”

In the same vein, calling a project team agile does not really make it agile. Let's admit it: the term "agile" is a buzzword that is cool to use. Sometimes (or should it be oftentimes) folks pick up the aspects of a methodology that suit them and then attempt to fit them into their existing process, with even worse results than before. And then they wonder why the new methodology isn't working! The same situation occurs with the adoption of agile. In this entry we look at some of the agile principles and practices that are easy to selectively pick and choose.

a) An agile team is capable of releasing the software to customers at the end of each iteration (generally 2-4 weeks long). Yes, able to ship it to the customers: fully developed, integrated, tested, wrapped up and mailed. Customers are able to see working software whose features become available in increments, and to understand the progress being made. Of course, customers can also provide quick feedback to enable any course corrections as needed. Decisions can be made on whether to add features, change existing functionality, or even stop further development, without having to wait for the complete release time frame.

Click here to Read More

Prioritisation of the tests - Risk Based Testing

Never have enough time?

The overriding reason why we prioritise is that we never have enough time, and the prioritisation process helps us to decide what is in and out of scope.

First principle: to make sure the most important tests are included in test plans

So, the first principle of prioritisation must be that we make sure that the most important tests are included in the test plans. That’s pretty obvious.

Second principle: to make sure the most important tests are executed

The second principle, however, is that we must make sure the most important tests are actually executed. If, when test execution starts, it turns out that we run out of time before the test plan is complete, we want to be sure that, even if we get squeezed, the most important tests at least have been run. So the most important tests must be scheduled early to ensure that they do get run.

If tests reveal major problems, it is better to find them early, to maximise the time available to correct them.

There is one more important benefit of running the most important tests first: if they reveal problems early on, you have the maximum amount of time to fix them and recover the project.
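The prioritisation described above is often formalised as risk-based test ordering: score each test by likelihood of failure times impact of failure, then schedule the highest-risk tests first. A minimal sketch follows; the test names and ratings are hypothetical.

```python
# Risk-based test ordering sketch: risk = likelihood * impact (1-5 scales).
# Test names and ratings below are hypothetical.

tests = [
    {"name": "login",          "likelihood": 4, "impact": 5},
    {"name": "report_export",  "likelihood": 2, "impact": 3},
    {"name": "payment",        "likelihood": 3, "impact": 5},
    {"name": "profile_update", "likelihood": 2, "impact": 2},
]

def risk(test):
    """Risk exposure: probability of failure times cost of failure."""
    return test["likelihood"] * test["impact"]

# Schedule the highest-risk tests first, so even if execution is cut
# short, the most important tests have already been run.
schedule = sorted(tests, key=risk, reverse=True)
for t in schedule:
    print(t["name"], risk(t))
```

With this ordering, running out of time truncates the schedule from the low-risk end, which is exactly what the two principles above require.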