

STAREAST Tutorials - Register Early and Save!
Offered in half- and full-day formats, our 31 in-depth tutorial sessions allow you to drill down into the areas of interest that matter most to you. Come see why STAREAST conference attendees rate the tutorials as one of the most popular features of the entire conference.

ISTQB's Advanced Technical Test Analyst - A Technically Enlightened Way to Test Systems

Advanced Level Technical Test Analysts should be able to:

  • Structure the tasks defined in the test strategy in terms of technical requirements
  • Analyze the internal structure of the system in sufficient detail to meet the expected quality level
  • Evaluate the system in terms of technical quality attributes such as performance, security, etc.
  • Prepare and execute the adequate activities, and report on their progress
  • Conduct technical testing activities
  • Provide the necessary evidence to support evaluations
  • Implement the necessary tools and techniques to achieve the defined goals



Watch the recorded webinar below by Rex Black on ISTQB's Advanced Technical Test Analyst - A Technically Enlightened Way to Test Systems


Performance Testing on Advanced Web Sites - On Demand Webinar

Web 2.0 has taken the IT world by storm—allowing organizations to move from simple, static methods of data delivery to being able to provide highly interactive, user-centric, online experiences. However, traditional performance testing solutions cannot keep pace with the growing complexity. HP TruClient technology, available in HP LoadRunner and HP Performance Center, is a browser-based virtual user generator (VUGen) that makes testing Web 2.0 and Ajax applications faster, easier, and more comprehensive. 

Onlife Health will discuss how they've used HP TruClient technology to develop their online portal and how it has helped them reduce application test cycles and identify and eliminate potential problems throughout the application lifecycle. Onlife Health will also describe how they've achieved faster scripting time (by at least 50%) and share real-world best practices.

During this web seminar, you'll learn how to:
  • Test critical end-user-facing Web 2.0 and Ajax applications accurately and efficiently, even with beginner or less technical scripters
  • Reduce hardware and software costs by predicting application scalability and capacity, and lower the cost of defects by testing earlier
  • Pinpoint end-user, system-level, and code-level bottlenecks rapidly and with ease
Register for this event and receive a complimentary copy of HP's white paper, "HP TruClient Technology: Accelerating the Path to Testing Modern Applications." This white paper complements the valuable information you will gain from participating in this web seminar.

Click Here to Register.


Presenters
Ron Foster, Senior Systems Engineer, Onlife Health


With more than fourteen years of IT experience across various platforms and technologies, Ron has performance tested both large and small enterprise applications. He has more than six years' experience with LoadRunner and scripting against multiple protocols, including .NET record/replay, Citrix, Web, AJAX C&S, and TruClient. Ron is currently a Senior Systems Engineer for Onlife Health in Nashville, Tenn.





Priya Kothari, Senior Product Manager, HP Software

Priya Kothari is the senior product manager for HP's Performance Validation solutions, including HP LoadRunner and HP Performance Center. Priya has been with HP (and previously Mercury) for more than eleven years. She was a founding member of "ActiveTune," Mercury's managed testing service launched in 2000, which became the foundation for HP's current Software-as-a-Service (SaaS) offering. Priya has a long history with Performance Center, having been one of its first users since its early inception.






Is testing about checking against system requirements, or is it about exploring the software?

Some time back, Michael Bolton published an article on testing vs. checking. On the same concept, Elisabeth Hendrickson wrote an article, "Two Sides of Testing," in which she answers the question: Is testing about checking against system requirements, or is it about exploring the software? Her article makes a couple of very noteworthy points.

Elisabeth argues that the following claim is not always true:


Many years ago in a hallway conversation at a conference, a test manager and I were discussing our respective approaches to testing.
 "If they can't tell me what the software is supposed to do, I can't test it," Francine, the test manager, scowled. "So, I tell them that I won't start testing until they produce a detailed requirements document."
My eyebrows shot up through my hairline. At the time, I was working for a Silicon Valley software vendor that made consumer applications. If I waited for a comprehensive specification before I started testing, I'd be waiting forever. And, I'd be fired for failing to contribute meaningfully to the project. I said something to that effect, and Francine just shook her head at me. She couldn't imagine not having detailed specifications. I couldn't imagine holding the project hostage until I got documentation.

A few more useful quotes:

In the past, I was firmly on the side of using exploratory approaches. For most of my career, I worked for organizations that preferred lightweight documentation, so we didn't usually produce detailed test scripts. Even if those organizations had wanted binders full of step-by-step test cases, I agreed with James Bach that sticking to a testing script is like playing a game of  Twenty Questions where you have to ask all the questions in advance.
However, my perspective on this debate has shifted in the past several years as I started working with agile teams that value testing in all forms. I have come to realize that the old discussion of whether "good testing" involves predefined, detailed test scripts or exploratory testing is like pitting salt against pepper, glue against staples, or belts against suspenders.
It is a false dilemma and a pointless debate.
So, what do you think? Is testing about checking against system requirements, or is it about exploring the software? Share your comments below.

Click here to Read the complete article "The two sides of testing".

Agile Test Automation - Training

Successful Automation in an Agile Environment
December 13–14, 2011

Don't miss out on this popular new course! Make plans to join SQE for the final Agile Test Automation course of 2011!

In this interactive tutorial, Janet Gregory describes how to use automation early and guide development, what tests should be automated, and how to work through ways to overcome common barriers to automation.

Are your automated tests effective and easy to maintain?
Janet will use examples to illustrate how to design automated tests for maximum effectiveness and ease of maintenance. Find out different approaches for evaluating and implementing automated test tools, shortening feedback cycles, creating realistic test data, and evaluating your automation efforts.

Do you ever question how to deliver good quality when you have to release so often?
By combining a collaborative team approach with appropriate tools and design approaches, over time you can not only automate your regression tests but also use automation to enhance exploratory testing.

Do you worry about testing lagging behind coding?
By the end of this session, you'll understand how to fit automation activities within each iteration so that testing "keeps up" with coding.

Here's what one recent attendee had to say about Agile Test Automation:
"Excellent content, excellent instructor, convenient format. Great combo!" William Krebs, Allscripts

About the Instructor:
The co-author of Agile Testing: A Practical Guide for Agile Testers and Teams, Janet Gregory specializes in helping teams build quality systems. Janet's greatest passion is promoting agile quality processes. As a tester and coach, she has helped introduce agile development practices into companies and has successfully transitioned several traditional test teams into the agile world. Janet focuses on working with business users and testers to understand their roles in agile projects, and she has partnered with developers on her agile teams to implement successful test automation solutions.


Click Here to register for the training

Performance Testing on the Cloud

SoftSmith is conducting a "Cloud Based Performance Testing" webinar this month:

Date: 17-Nov-2011
Time: 4.00pm to 5.30pm IST
Webinar Link: https://www2.gotomeeting.com/register/314865266


Detailed Content:

  • How will you know the performance of a hosted SaaS application?
  • Measure performance and availability of cloud applications.
  • Load test from Amazon cloud.
  • Load test from real end user machines, from various cities, various countries.
  • Analyze performance measurements.
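
As a minimal, hypothetical sketch of the measurement idea in the list above (the URL, sample count, and budget are invented placeholders, not part of the webinar), this is roughly how you could probe response time and availability of a hosted SaaS application from any client machine:

import time
import urllib.request

# Invented placeholder; point this at the SaaS application under test.
URL = "https://example.com/health"

def probe(url, timeout=10):
    """Return (available, elapsed_seconds) for a single request."""
    start = time.perf_counter()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            ok = 200 <= resp.status < 300
    except Exception:
        ok = False
    return ok, time.perf_counter() - start

if __name__ == "__main__":
    results = [probe(URL) for _ in range(10)]
    availability = sum(ok for ok, _ in results) / len(results)
    avg_latency = sum(t for _, t in results) / len(results)
    print(f"availability: {availability:.0%}, avg latency: {avg_latency:.3f}s")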



You can register for this webinar by clicking the GoToMeeting link above; the webinar is absolutely free of cost.

Smart Phone Testing – Are You "Smart" Enough?


Mobile phone usage has exploded over the last few years as the device transitions from its traditional role as a communications medium to becoming a multi-purpose personal gadget. This expansion, driven by a flurry of technological advancements across a variety of device models, complicates the product development and rollout process for device manufacturers and application developers.

The more daunting task now becomes application quality testing across operating systems, device platforms, and networks to ensure wide acceptance and proper usage. Non-functional testing, including usability, security, and adaptability, is as important as functional testing. Effective testing enables device makers and application developers to collect appropriate metrics that help improve product quality.

In this web seminar, Cognizant explores industry best practices on mobile testing and demonstrates effective ways of managing mobile application quality. From this web seminar, you’ll take away:

  • A clear understanding of different aspects of mobile application testing and effective execution of appropriate testing approaches
  • An automation approach to accelerate any mobile testing cycle 
  • How to establish a mobile testing lab 
  • New techniques to emulate, simulate, and handle multiple browsers, operating systems, platforms, networks, and languages

Presenter: Pradeep Kumar, Head of Mobile Testing Practice, Cognizant

Click the link below to see this on-demand webinar:
https://goo.gl/v1iEk

Visual Studio Quality Assurance and Testing Tools - Case Study

26% reduction in development-to-test cycle time
91% increase in defects discovered

Read this white paper from Pique Solutions to learn how you can get these types of improvements, and the business case to support adopting Microsoft Visual Studio quality assurance and testing tools.

Download this report from Pique Solutions today!

Requirements Management Fundamentals

Learn four fundamentals of requirements management.


Too often, projects fail due to issues with requirements. Today, more than ever, it's important for everyone involved in a project to clearly understand the scope of what the team is building and why. In this whitepaper, we'll cover the significance of requirements management, as well as four fundamental concepts that are valuable for all stakeholders to understand:

  • Planning good requirements
  • Collaboration and getting buy-in
  • Traceability and change management
  • Quality assurance

Click Here to download the free whitepaper: https://www.jamasoftware.com/contour/rm101-stickyminds.php

Performance Testing Across the Lifecycle

Is performance the last thing you test? Most projects ignore performance testing until code is complete - when fixing bugs or rethinking architecture choices is most costly. In this webinar, Matt Heusser, leading author and consultant on Agile Testing, and Dan Bartow, SOASTA's Performance Engineering VP, will discuss the challenges and benefits of focusing on performance earlier, as part of a rapid development and deployment approach for today's websites and mobile applications. 

They'll explore how to:
  • Build comprehensive test plans that involve Development, Test and Operations
  • Create iterative performance tests and execute them across the development cycle
  • Integrate functional and performance testing in fast-paced environments
  • Integrate performance testing into continuous build frameworks
  • Report on "Performance Regression" and "Performance Coverage"
Matt will discuss how to best direct the planning process to cover performance in agile models, with tips and examples. Dan will illustrate how the always-agile SOASTA development team has run functional and performance tests using the CloudTest platform as part of a daily build process for years. You'll see how CloudTest Lite can be integrated with continuous build frameworks like Hudson and Jenkins. Join these seasoned experts for an educational session that will enable you to include performance testing early and often, no matter how fast your teams move.
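
As a minimal illustration of a "performance regression" check that a continuous build framework like Hudson or Jenkins could run alongside the functional suite (the checkout function and the 200 ms budget are invented placeholders; this is not the CloudTest API):

import time
import unittest

def checkout(cart):
    """Invented stand-in for the code path under test."""
    time.sleep(0.05)
    return sum(cart)

class PerformanceRegression(unittest.TestCase):
    BUDGET_SECONDS = 0.200  # build fails if the budget is exceeded

    def test_checkout_within_budget(self):
        start = time.perf_counter()
        checkout([10, 20, 30])
        elapsed = time.perf_counter() - start
        self.assertLess(elapsed, self.BUDGET_SECONDS,
                        f"checkout took {elapsed:.3f}s, budget is {self.BUDGET_SECONDS}s")

if __name__ == "__main__":
    unittest.main()

Because it is a plain xUnit test, any continuous integration server that runs the suite will report a performance regression the same way it reports a functional failure.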

When Testers Abuse Authority: Q&A with Michael Bolton

When Michael Bolton talks, testers listen. In our latest Testing the Limits interview, we shoot some questions back and forth with the popular author, speaker, and consultant. Here's a sneak peek:

"I urge testers: You want to manage a project? Become a project manager. I urge quality assurance people: You want to assure quality? Make sure you have real, final authority over the product and the people who produce it. That is, become a manager. You're not a gatekeeper of quality; you're a speed bump on the road to quality."

"I've met testers who believe that it's their prerogative to tell programmers what to do or how to do it. I recommend that such testers reflect on how they feel when they're told what to do by people who've never done testing work."

A software expert's heuristic for regression testing

By Karen N. Johnson:

Regression testing can be a bundle of work. Regression testing is testing designed to revisit existing aspects of an application or product to ensure the application is still working after changes have been made within a product or new features have been added. By definition, regression testing can be expansive because we may want to ensure nearly every aspect of a product is retested. Recognizing that regression tests are typically previously-created tests means that the labor of regression testing is not in test creation as much as test execution time. Planning what to regression test is the first challenge. So, how do you choose what to regression test? 

I devised a heuristic to plan regression testing called RCRCRC. It stands for:
  • Recent
  • Core
  • Risky
  • Configuration sensitive
  • Repaired
  • Chronic
If you haven't worked with heuristics before, the term can sound intimidating. A heuristic is a rule of thumb or a shortcut that helps us solve problems and make judgments. A heuristic is not a perfect method. The purpose of this heuristic is to help you think through various aspects of the application you're testing and think about the product in different ways.
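
To see how the heuristic might drive planning, here is a toy sketch (entirely illustrative; Johnson describes a thinking tool, not code) that ranks regression tests by how many RCRCRC flags they carry:

# Toy RCRCRC triage: each flag mirrors one letter of Johnson's heuristic.
FLAGS = ("recent", "core", "risky", "config_sensitive", "repaired", "chronic")

tests = [
    {"name": "login",      "core": True,   "repaired": True},
    {"name": "export_pdf", "recent": True, "chronic": True},
    {"name": "help_screen"},
]

def rcrcrc_score(test):
    """Count how many RCRCRC questions point at this test."""
    return sum(test.get(flag, False) for flag in FLAGS)

# Run the highest-scoring regression tests first.
for t in sorted(tests, key=rcrcrc_score, reverse=True):
    print(t["name"], rcrcrc_score(t))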


Read more at: https://searchsoftwarequality.techtarget.com/tip/A-software-experts-heuristic-for-regression-testing

Tips for Better User Acceptance Testing

By Karen N. Johnson

The theory of user acceptance testing (UAT) is straightforward: User acceptance testing is conducted by users of the product. Users test a product to determine whether the product meets their needs, expectations, and/or requirements. But the distance between the theory of UAT and the reality of what takes place in UAT can be a mighty big gap.
The user acceptance test cycle can be one of the vaguest and most poorly planned segments of the whole product development lifecycle. Confusion may abound about exactly what UAT is and who is responsible for running it. One of the larger pain points of UAT is how late in the cycle this testing takes place. Typically UAT is one of the last efforts before product launch. The late timeframe of the testing adds to frustration, leaving some users and product team members wondering, "What's the point of UAT?"

Read more at:
https://www.informit.com/articles/article.aspx?p=1431821


Top 10 Qualities of a Project Manager


By Timothy R. Barry
What qualities are most important for a project leader to be effective? Over the past few years, the people at ESI International, world leaders in project management training, have looked into what makes an effective project leader. With the unique opportunity to ask some of the most talented project leaders in the world on their Project Leadership courses, ESI has collected a running tally of their responses. Below are the top ten, in rank order by frequency listed.


Inspires a Shared Vision
An effective project leader is often described as having a vision of where to go and the ability to articulate it. Visionaries thrive on change and being able to draw new boundaries. It was once said that a leader is someone who "lifts us up, gives us a reason for being and gives the vision and spirit to change." Visionary leaders enable people to feel they have a real stake in the project. They empower people to experience the vision on their own. According to Bennis "They offer people opportunities to create their own vision, to explore what the vision will mean to their jobs and lives, and to envision their future as part of the vision for the organisation." (Bennis, 1997)


Read More at:

Don't Discard Test-driven Development in the Cloud

By Arin

Writing software for the cloud can be very different from writing software that runs on a single server. It can make test-driven development (TDD) more complicated, but it is still well worth doing. For the purposes of this article, I'll consider two types of software development in the cloud: cloud hosting and distributed computing.


In cloud hosting, you are still writing the same type of software that you have always written. A simple example is a website developed in PHP, Java, Ruby on Rails, or .NET. You are not developing anything out of the ordinary, and the only impact cloud computing makes on your architecture is that it is easier for you to scale the web UI of your system as traffic grows.

For cloud-hosting scenarios, nothing has changed with regard to TDD. The typical xUnit frameworks will provide all that you need to write solid software using good XP practices.
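
As a minimal illustration, a plain xUnit-style test for cloud-hosted code looks exactly like one for single-server code (the price_with_tax function is an invented stand-in, not from the article):

import unittest

def price_with_tax(amount, rate=0.08):
    """Business logic that deploys identically to one server or many."""
    return round(amount * (1 + rate), 2)

class PriceTests(unittest.TestCase):
    def test_applies_default_rate(self):
        self.assertEqual(price_with_tax(100), 108.00)

    def test_zero_amount(self):
        self.assertEqual(price_with_tax(0), 0.00)

if __name__ == "__main__":
    unittest.main()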

Distributed computing is different. For the purposes of this article, I will define it as software that is designed to scale horizontally across many servers in order to improve reliability or speed, or simply to spread the computational requirements of complex algorithms across many servers.

The use of clouds for distributed computing is more complicated and less common than the more straightforward cloud hosting scenario. However, more teams are being called on to develop these types of applications, and there are many open source projects that are making it easier to tap into the more advanced powers of cloud computing. 



Read more at:
https://www.stickyminds.com/testandevaluation.asp?Function=FEATUREDETAIL&ObjectId=17177&ObjectType=COL

Hypothesis Testing

A very helpful article by Jesse Farmer:


Say I hand you a coin. How would you tell if it's fair? If you flipped it 100 times and it came up heads 51 times, what would you say? What if it came up heads 5 times instead? In the first case you'd be inclined to say the coin was fair, and in the second case you'd be inclined to say it was biased towards tails. How certain are you? Or, even more specifically, how likely is it actually that the coin is fair in each case?
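
Farmer's coin question can be checked numerically. Here is a small sketch using only the Python standard library (the two-sided p-value convention used here is one common choice among several):

from math import comb

def binom_pmf(k, n, p=0.5):
    """Probability of exactly k heads in n flips of a coin with bias p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

def two_sided_p(k, n, p=0.5):
    """Sum the probabilities of all outcomes at least as extreme as k."""
    pk = binom_pmf(k, n, p)
    return sum(binom_pmf(i, n, p) for i in range(n + 1) if binom_pmf(i, n, p) <= pk)

print(two_sided_p(51, 100))  # ~0.92: 51 heads is entirely consistent with a fair coin
print(two_sided_p(5, 100))   # ~1e-22: 5 heads is overwhelming evidence of bias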

Read more at:
https://20bits.com/articles/hypothesis-testing-the-basics/

A Pragmatic Strategy for NOT Testing in the Dark


© 1999 Johanna Rothman and Brian Lawrence. Originally published in Software Testing and Quality Engineering, Mar./April 1999 Issue.
A project manager strides purposefully into your office. "JB, this disk has the latest and greatest release of our software. Please test it." You say "OK, OK. What does it do?" The manager stops in his tracks and says "Uh, the usual stuff..."
Sound familiar? We've run into this situation as employees and as consultants. We've seen testers take the disk, stick it in the drive, and just start testing away.
That's testing in the dark. We think there are approaches that are more productive. When we test or manage testers, we plan the testing tasks to know what value we can get from the testing part of the project.
Let's try turning on the lights!
Even for a short (2-week) testing project, we've used this strategy. Consider this approach:
  • Discover the product's requirements, to know what testing needs to be done;
  • Define what quality means to the project, to know how much time and effort we can apply to testing;
  • Define a test plan, including release criteria, to check out different people's understanding of what's important about the product, and to know when we're ready to ship.

Discover the Requirements

The first part of your planning is to play detective. Your product will have a variety of requirements over its lifetime. Some will be more important sooner, others, later. You have to discover this project's requirements.

Read more at: https://www.jrothman.com/Papers/Pragmaticstrategies.html

Metrics for Software Testing: Managing with Facts, Part 2: Process Metrics


A very helpful article written by Rex Black:

In the previous article in this series, I offered a number of general observations about metrics, illustrated with examples. We talked about the use of metrics to manage testing and quality with facts. We covered the proper development of metrics, top-down (objective-based) not bottom-up (tools-based). We looked at how to recognize a good set of metrics.
In the next three articles in the series, we'll look at specific types of metrics. In this article, we will take up process metrics. Process metrics can help us understand the quality capability of the software engineering process as well as the testing capability of the software testing process. Understanding these capabilities is a prerequisite to rational, fact-driven process improvement decisions. In this article, you'll learn how to develop and understand good process metrics.
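
As one concrete illustration, a commonly cited process metric of this kind is defect detection percentage: the share of all known defects that testing caught before release. A hedged sketch of the arithmetic (the counts below are invented):

def defect_detection_percentage(found_in_test, found_in_production):
    """DDP: fraction of all known defects caught before release."""
    total = found_in_test + found_in_production
    return found_in_test / total if total else 0.0

# Invented counts: 180 defects found in test, 20 escaped to production.
print(f"{defect_detection_percentage(180, 20):.0%}")  # prints 90%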

Read More at: https://www.rbcs-us.com/images/documents/Metrics-Article2-0711.pdf






Checklist for Windows Compliance Testing


A very helpful checklist for testers by Ray Claridge, Product Manager at IPC Media

For Each Application

1. Start the application by double clicking on its icon. The loading message should show the application name, version number, and a bigger pictorial representation of the icon.

2. The main window of the application should have the same caption as the caption of the icon in Program Manager.

3. Closing the application should result in an "Are you Sure" message box.

4. Attempt to start the application twice. This should not be allowed - you should be returned to the main window.

5. Try to start the application twice as it is loading.

6. On each window, if the application is busy, then the hourglass should be displayed. If there is no hourglass (e.g. alpha access enquiries), then some enquiry-in-progress message should be displayed.

7. The window caption for every application should have the name of the application and the window name - especially the error messages. These should be checked for spelling, English, and clarity, especially at the top of the screen. Check that the title of the window makes sense.
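
Parts of this checklist can be scripted. As a hypothetical sketch using the pywinauto library (the executable path and expected caption are invented placeholders; a real compliance harness would cover every numbered check, not just one):

from pywinauto.application import Application

APP_PATH = r"C:\Program Files\MyApp\myapp.exe"  # invented placeholder
EXPECTED_CAPTION = "MyApp"                       # invented placeholder

# Automates check 2: the main window caption should carry the application name.
app = Application(backend="uia").start(APP_PATH)
main_window = app.top_window()
caption = main_window.window_text()
assert EXPECTED_CAPTION in caption, f"caption {caption!r} is missing {EXPECTED_CAPTION!r}"
print("Caption check passed:", caption)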


Software QA Series - Principles of Quality - Free Webinar


This foundation webinar is designed for learning and understanding IT quality concepts. It provides an excellent overview of the entire IT quality professional area. It further provides a macro introduction of the quality assurance area by introducing and reviewing the principles expounded by leading quality experts.

This webinar series also addresses the important aspects to consider for organizations that desire to properly organize their quality initiative for improved productivity and organizational integrity.

The participant will be introduced to the important quality principles, concepts, responsibilities and vocabulary. Here is what we will cover:

  • Understanding who is responsible for Quality
  • Discussion of examples of quality initiatives that work
  • Description of roadblocks on the road to quality
  • Description of Industry Models
Find out what it takes to be a successful software QA person and how quality can add significant value to software development.


Best Practices in Performance Testing to Ensure Success


On September 28 at Noon EDT, Neotys invites you to a webinar: "Best Practices in Performance Testing to Ensure Success".
In this live webinar with leading retailer The Bon-Ton Stores, you'll learn how to optimize the performance of your web applications while improving your responsiveness to the business, with ease and without any special skills. Dan Gerard, Divisional VP of Technical & Web Services, and Will Esclusa, Manager of Web Services & Technologies at The Bon-Ton Stores, will join me, Rebecca Clinard, Technology Strategist at Neotys, to discuss:
  • Meeting the challenge of establishing your own in-house performance testing
  • How you can better meet the urgent and changing needs of the business
  • Overcoming the challenges of load testing a complex Web 2.0 eCommerce site
  • Achieving the "10-minute Test Script"
  • The right way to handle the squeeze of tight timeframes
  • How to improve test productivity and efficiency for resource-constrained technology teams
The presentation will be followed by an audience Q&A.
Best Practices Webinar - Wednesday, September 28, Noon EDT (9am PDT)
Register for "Best Practices in Performance Testing to Ensure Success" today.


Register here - https://www.sdtimes.com/content/resources.aspx?ShowOnlyResourceID=524&RefID=Neotys9.28-Neotys1

Top Ten Risks When Leading an Offshore Test Team (Part 2)


By Michael Hackett, Senior Vice President, LogiGear Corporation

In part 1 of this article, we explored the first five of the top ten risks, including:

1. Offshore work can be difficult to measure or quantify, leading to lack of confidence in the offshore effort
2. Lack of visibility into day-to-day work
3. Lack of a competent lead/point-of-contact
4. Lack of contingency plans for downtime
5. Offshore teams lose access to onshore servers and applications

The second five risks are based on the communication and cultural problems that exist between distributed teams, as well as the business climate in popular offshoring locations.


Key Principles of Test Design


Test design is the single biggest contributor to success in software testing. Not only can good test design result in good coverage, it is also a major contributor to efficiency. The principle of test design should be "lean and mean." The tests should be of a manageable size and at the same time complete and aggressive enough to find bugs before a system or system update is released.
Test design is also a major factor for success in test automation. This is not that intuitive. Like many others, I initially thought that successful automation is a matter of good programming or even of "buying the right tool." Finding that test design is the main driving force for automation success is something that I had to learn over the years, often the hard way.
What I have found is that there are three main goals that need to be achieved in test design. I like to characterize them as the "Three Holy Grails of Test Design" - a metaphor based on the stories of King Arthur and the Round Table, as the three goals are difficult to reach, mimicking the struggle King Arthur's knights experienced in their search for the Holy Grail. This article will introduce the three "grails" to look for in test design. In subsequent articles of this series I go into more detail about each of the goals.
Read More at - 
https://www.logigear.com/january-issue-2011/981-key-principles-of-test-design.html

Effective Management of Test Automation Failures


By Hung Q. Nguyen, CEO, President, LogiGear Corporation
In recent years, much attention has been paid to setting up test automation frameworks which are effective, easy to maintain, and allow the whole testing team to contribute to the testing effort. In doing so, we often leave out one of the most critical considerations of test automation: What do we do when the test automation doesn't work correctly?
Testing teams need to develop a practical solution for determining who's accountable for analyzing test automation failures, and ensure that the right processes and skills exist to effectively do the analysis.  There are three primary reasons why your test automation may not work correctly:
  1. There is an error in the automated test itself
  2. The application under test (AUT) has changed
  3. The automation has uncovered a bug in the AUT
The first step whenever a failed test occurs in test automation is to figure out what happened. So who should be doing this?
Too often in testing organizations, it's the case that as soon as a test engineer runs into a problem with the test automation, they simply tell the automation engineer "Hey, the test automation isn't working!" The job of analysis then falls to the automation engineer, who is already overburdened with implementing/maintaining new and existing test automation.
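
A lightweight way to make that accountability concrete is to force every automation failure into one of the three buckets above before it is handed off. A minimal sketch (the three categories come from the article; the routing rules and data structure are an invented illustration):

from enum import Enum

class FailureCause(Enum):
    TEST_ERROR = "error in the automated test itself"
    AUT_CHANGED = "the application under test has changed"
    AUT_BUG = "the automation uncovered a bug in the AUT"

def route(cause):
    """Route the failure to whoever should act on it."""
    if cause is FailureCause.TEST_ERROR:
        return "automation engineer: fix the script"
    if cause is FailureCause.AUT_CHANGED:
        return "test engineer: update the test to match the new behavior"
    return "file a defect report against the AUT"

print(route(FailureCause.AUT_CHANGED))
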
Read More at: https://goo.gl/Bu5cX

Getting Automated Testing Under Control


In an effort to counter test automation challenges, Hans Buwalda and Maartje Kasdorp cite test clusters, test lines, and navigation as tools for teams to execute testing projects. With descriptive explanations and accompanying diagrams, the authors argue that test design should be strictly separated from the automation of tests.
This article first appeared in STQE, November/December 1999.


Global Software Test Automation - Book Review


Global Software Test Automation is the first book to offer software testing strategies and tactics for executives. Written by executives and endorsed by executives, it is also the first to offer a practical business case for effective test automation, as part of the innovative new approach to software testing: Global Test Automation — a proven solution, backed by case studies, that leverages both test automation and offshoring to meet your organization's quality goals.

Mobile Application Testing: Process, Tools and Techniques


The market for mobile applications grows every day and is becoming more and more demanding as technology advances. In a new study, Yankee Group predicts a $4.2 billion "Mobile App Gold Rush" by 2013, which includes:

- Estimated number of smartphone users: 160 million
- Estimated number of smartphone app downloads: 7 billion
- Estimated revenue from smartphone app downloads: $4.2 billion

At Organic, our goal is to stay on the cutting edge of emerging platforms by launching new and diverse applications. We have this goal in mind when developing mobile web applications. We utilize some of the same styles of programming used for developing web applications. We also follow the same testing methodology employed for web development when testing our mobile applications.

Read More at: https://www.logigear.com/july-issue-2011/1059-mobile-application-testing-process-tools-and-techniques.html

Agile Test Automation - Truth, Oxymoron or Lie?


It can be confusing for everyone in an agile team to understand when or what to test when there isn't a test phase or any formal documented requirements. Whatever your agile methodology, projects require a change in the way QA and development work together. The use of technology and automation is much more difficult, and finding a practical approach to testing is critical for successful agile projects.

George Wilson explores how testing in agile is different and gives pragmatic advice to ensure that application quality, within an agile environment, isn't compromised. Discussions on the techniques for quickly getting control of manual testing and progressing to automated testing in agile will leave you with fresh thinking to resolve or prevent any testing dysfunctions in your agile teams.


Download the presentation from here - https://www.origsoft.com/webinars/agile_testing/agile_test_automation.pdf

Watch recorded webinar - https://vzaar.com/videos/676465

Source: https://www.origsoft.com/webinars/agile_testing/

TestMaker - Open Source Software Test Platform


Here is what we have for you:
- A systematic, simple way to understand and implement effective tests of your application
- Test software to build tests and deploy to desktop, grid, and cloud environments
- A clear tutorial approach to the PushToTest methodology of building and repurposing tests to understand the correct functioning, performance, and scalability of your application
- An organized reference guide to the "best of the best" tips, techniques, patterns and antipatterns from PushToTest over the years, and how it all fits together
- Invitations to participate in weekly free live Webinars to learn how the test experts effectively use Open Source Testing to build scalable applications

Download the tool and read more from here -

Why is software testing so critical?


The answer is simple. Software bugs and errors are so widespread and so detrimental that they cost the US economy an estimated 0.6 percent of the gross domestic product, which translates into a whopping $100 billion annually. Half of these costs are borne by users and the other half by software developers and vendors. We must remember that nearly every business and industry in the United States depends on the development, marketing, and after-sales support of software services and products. A study conducted by the Department of Commerce's National Institute of Standards and Technology (NIST) has assessed that more than a third of these costs could be eliminated by an improved software testing infrastructure, including testing tools for load testing, stress testing, and performance testing.

Read more at: https://goo.gl/MIwfZ

Require Software Testers - Performance Testing | Job Location: Gurgaon


Qualification : B.Tech/B.E., Bachelor of Computer Applications (B.C.A.), Master of Computer Applications (M.C.A.), Master of Technology (M.Tech/ME)
Experience : Min 2 Years, Max 5 Years
Job Description : QA Engineers with 2-5 years of experience in performance analysis, performance monitoring, and automated scripting for performance data logging preferred. Exposure to load management tools and hands-on skills in adapting open source tools will be an asset.
Functional Area : IT / Telecom - Software
Location : Gurgaon
Country : India


Apply-> [email protected] (with subject "Resume for Performance Tester")

Required Software Tester (Quality Engineer) in Gurgaon


Cvent is looking for a talented quality engineer to join our Technology team, which designs, develops and operates large-scale, Web-based applications. The Quality Engineer ensures the quality and integrity of the application. This position plays an integral role in application usability, providing product feedback at all stages of the development life cycle. This is an entry-level position with significant opportunity for career growth.
Star performers in this profile receive a unique opportunity to visit the Cvent Headquarters office in the US, for 3 months of further learning and career development.


Position Duties
· Write and execute test plans for the application
· Review and assist in the development of test plans being prepared by others
· Document defects and work with engineers to resolve issues
· Provide usability feedback to the product team
· Assist in applying application standards


Candidate Requirements:
· B.E./B.Tech/MCA (Must)
· 0-1 years of relevant experience
· Excellent problem solving and analytical skills
· Superior attention to detail
· Interest in technology and a hunger for learning
· Ability to work independently and as part of a team
· Knowledge of relational databases and Microsoft Office required
· Knowledge of SQL, Software Development Lifecycle, HTML, XML, and automated testing tools a plus

Email your resume at: [email protected]

Crowd Sourced Testing


(By Rajini Padmanaban, Director of Engagement, Global Testing Services)

Given the global distribution of software and how the internet is bringing the world together, community-based testing activities have been gaining a lot of momentum in recent years. Such activities could be forum discussions, beta testing efforts, crowd sourced testing, etc. Of specific interest in this blog is what crowd sourced testing is and when this model can be leveraged to yield success.

In simple terms, crowd sourced testing is leveraging the community at large to test a given product. This community spans people from diverse cultures, geographies, languages, and walks of life, who put the software to use under very realistic scenarios that a tester on the core test team may not be able to think of, given his or her limited bounds of operation.

Read More at: https://www.qainfotech.com/blog/2011/06/crowd-sourced-testing-is-it-really-for-you/

A Note on Globalization Testing



The goal of globalization testing is to detect potential problems in application design that could inhibit globalization. It makes sure that the code can handle all international support without breaking functionality that would cause either data loss or display problems. Globalization testing checks proper functionality of the product with any of the culture/locale settings using every type of international input possible. 

Proper functionality of the product assumes both a stable component that works according to design specification, regardless of international environment settings or cultures/locales, and the correct representation of data.

The following must be part of your globalization-testing plan:
Decide the priority of each component

To make globalization testing more effective, assign a testing priority to all tested components. Components that should receive top priority:
  • Support text data in the ANSI (American National Standards Institute) format
  • Extensively handle strings (for example, components with many edit controls)
  • Use files for data storage or data exchange (e.g., Windows metafiles, security configuration tools, and Web-based tools)
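
A tiny, hypothetical test for the string-handling category above: verify that a component round-trips non-ASCII text rather than assuming ANSI (the save/load pair is an invented stand-in for any file-based storage the checklist mentions):

import os
import tempfile
import unittest

def save(path, text):
    """Store text as UTF-8; an ANSI-only component would corrupt it."""
    with open(path, "w", encoding="utf-8") as f:
        f.write(text)

def load(path):
    with open(path, encoding="utf-8") as f:
        return f.read()

class GlobalizationRoundTrip(unittest.TestCase):
    def test_non_ascii_round_trip(self):
        samples = ["héllo", "こんにちは", "Привет", "مرحبا"]
        path = os.path.join(tempfile.mkdtemp(), "data.txt")
        for s in samples:
            save(path, s)
            self.assertEqual(load(path), s)

if __name__ == "__main__":
    unittest.main()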

Source: https://www.software-testing-india.info/globalization-testing.html

A Note on Usability Testing


Visitors to your company's website may have a wide range of Internet experience and, consequently, different expectations which must be fulfilled to win them over. While experienced users look for implementation of industry norms, newcomers need guidance to surf through the unfamiliar Web environment.

Failure to cater to such expectations is likely to result in lost sales, as visitors are unable to locate what they are looking for or unable to complete transactions.

Usability testing starts by identifying specific demographic groups within the target audience, taking into account their age, profession, cultural background, level of Internet exposure and many other relevant factors. 

Goals of usability testing

Usability testing is a black-box testing technique. The aim is to observe people using the product to discover errors and areas of improvement. Usability testing generally involves measuring how well test subjects respond in four areas: efficiency, accuracy, recall, and emotional response. The results of the first test can be treated as a baseline or control measurement; all subsequent tests can then be compared to the baseline to indicate improvement.

1. Performance -- How much time, and how many steps, are required for people to complete basic tasks? (For example, find something to buy, create a new account, and order the item.)

2. Accuracy -- How many mistakes did people make? (And were they fatal or recoverable with the right information?)
3. Recall -- How much does the person remember afterwards or after periods of non-use?
4. Emotional response -- How does the person feel about the tasks completed? Is the person confident, stressed? Would the user recommend this system to a friend?
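
Those four measures lend themselves to a simple comparison against the baseline run described above. A toy sketch with entirely invented numbers:

# Invented session averages. Time and mistakes are better when lower;
# recall and satisfaction are better when higher.
baseline = {"time_s": 190, "mistakes": 4.0, "recall": 0.60, "satisfaction": 3.1}
latest   = {"time_s": 140, "mistakes": 2.0, "recall": 0.75, "satisfaction": 4.0}

for metric, before in baseline.items():
    after = latest[metric]
    change = (after - before) / before * 100
    print(f"{metric:<13} {before:>6} -> {after:>6} ({change:+.0f}%)")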

Emerging Trends in Security Testing


(by APP Labs) Today the application security testing space is not what it used to be. There are several trends that are affecting the development and testing of next generation applications from a security perspective. The three towering facets that are rewriting the conventional path taken for security testing are – Cloud, Mobile and Rich Internet Application (RIA) platforms.

RIAs challenge traditional application security testing tools, which tend to focus on testing the web server side of the application. With RIA, the client side of the application logic has become equally important, if not more so, and has to be tested as well. This is bringing in new tides of challenges.

Cloud platforms will require application security testing tools to evolve to support the testing of applications built for specific cloud platforms and built using cloud-specific languages and frameworks. The other disruption cloud platforms are driving is the demand for testing to support XML-based APIs used to reach out to and consume cloud-based services.

Read More - https://blog.applabs.com/index.php/2011/01/emerging-trends-in-security-testing/

SYSTEM AND USER ACCEPTANCE TESTING


System testing usually refers to the testing of a specific system in a controlled environment to ensure that it will perform as expected and as required. From a Systems Development perspective, the term System Testing refers to the testing performed by the development team (programmers and other technicians) to ensure that the system works module by module (unit testing) and also as a whole. 

System Testing should ensure that each function of the system works as expected and all errors (bugs) are detected and analysed. It should also ensure that interfaces for export and import routines will function as required. After meeting the criteria of the Test Plan, the software moves to the next phase of quality check and undergoes User Acceptance Testing.

User Acceptance Testing: UAT refers to the test procedures which lead to formal 'acceptance' of new or changed systems. User Acceptance Testing is a critical phase of any project and requires significant participation of 'End Users'. An Acceptance Test Plan is also developed detailing the means by which 'Acceptance' will be achieved. The final part of the UAT can also include a parallel run to compare the new system against the current one. 

The User Acceptance Test Plan will vary from system to system but, in general, the testing should be planned in order to provide realistic and adequate exposure. The testing can be based upon User Requirements Specifications to which the system should conform. However, problems will continue to arise and it is important to determine what will be the expected and required responses from various parties concerned including Users, Project Team, Vendors and possibly Consultants/Contractors. 

Don't Measure All Software Defects Equally


(By App Labs) Quality just cannot be built into a software application right before it gets launched; it must be part of the software life cycle from the requirements phase through to production. With the growing prominence of software quality, most enterprises are investing in advanced tools, processes, and people, and more so in testing and quality assurance. But the growing need to develop and update applications faster to stay ahead of the competition, along with stringent project deadlines, constrains enterprises from setting aside enough time for testing.

While resolving all defects is quite important, the effort expended can be varied based on the priority of the defects. All defects may not have the same impact on the application, and hence smarter testing based on defect severity is what enterprises need in order to improve quality while meeting time constraints.
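
One way to act on that idea is to work the defect queue in severity order. A toy sketch (the weights and defects are invented; a real team would calibrate them):

# Invented severity weights for illustration.
SEVERITY_WEIGHT = {"critical": 8, "major": 4, "minor": 2, "trivial": 1}

defects = [
    {"id": "D-101", "severity": "minor"},
    {"id": "D-102", "severity": "critical"},
    {"id": "D-103", "severity": "major"},
]

# Work the queue in descending severity so critical fixes land first.
for d in sorted(defects, key=lambda d: SEVERITY_WEIGHT[d["severity"]], reverse=True):
    print(d["id"], d["severity"])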

Read More at: https://blog.applabs.com/index.php/2011/03/dont-measure-all-software-defects-equally/

Web Testing with Automation Anywhere


Businesses and applications today are increasingly moving to web-based systems. Time tracking systems, CRM, HR and payroll systems, financial software, materials management, order tracking systems, report generation: everything is web based.

Automation Anywhere can automate all web based processes without any programming; from simple, online form-filling to more complicated tasks like data transfer, web data extraction, image recognition or process automation.

SMART Automation Technology of "Automation Anywhere" offers more than 180 powerful actions for web automation. Automation Anywhere works with any website, even complex websites using Java, JavaScript, AJAX, Flash, or iFrames. Agent-less remote deployment allows automated tasks to be run on various machines across the network. Our advanced Web Recorder ensures accurate re-runs, taking into account website changes.

Automation Anywhere offers 2 easy options to automate web tasks: Use our powerful Web Recorder or use the editor with Point & Click wizards to automate tasks in minutes.

Web Recorder: Use the ‘Record’ button to simply record your actions. The ‘Web Recorder’ tool uses SMART Automation Technology to account for website changes or web control position changes to ensure that recorded tasks continue to run smoothly.

Watch demo video here: https://www.automationanywhere.com/lrn/keyFeat/webRecorder.htm?r=examples

Types of testing tools with examples


- Test Project Management - MS Project and TestDirector
- Defect Management - TestDirector, PVCS Defect Tracker, Bugzilla, and Rational ClearQuest
- Regression Test Automation - WinRunner, Rational Robot, QuickTest Pro, and QES Architect
- Coverage Management - Rational RequisitePro and Mercury TestDirector
- Performance Testing - Mercury LoadRunner, Rational Performance Studio, and Compuware QA
- Configuration Management - Visual SourceSafe and PVCS
- Test Data - Thinksoft Test Data Manager
- Dynamic Code Coverage - Rational PureCoverage

Open source solutions have fewer software flaws (by CIOL)


The debate on the usage of open source technologies in security products is growing day by day. While many companies are using open source security products, some companies are still evaluating whether to use open source applications to protect their IT products.

In an interaction with Abhigna NG of CIOL, Rahul Kopikar, Head of Business Development at Seclore, shared his views on the adoption of open source products by enterprises and the best practices developers need to follow while developing open source applications. Excerpts:

CIOL: How safe is it to use open source applications with the increase in malware attacks?

Rahul Kopikar: It is pretty safe as long as the software has gone through stringent QC and testing. The notion that open source is prone to malware attacks is wrong. On the contrary, proprietary software is more prone to malware attacks because it goes through limited testing, whereas with open source software the whole worldwide community contributes and tests the system.

Read More at: https://www.ciol.com/Developer/Open-Source/Feature/Open-source-solns-have-lesser-software-flaws/152619/0/

Automation Test Tool Selection | Which automation tool is good for use?


Before going for any automation tool, make sure that the tool has the following features:

 - Significant reduction in time taken per testing iteration for future application releases
 - Savings in manpower & associated costs, owing to reduced manual testing efforts (potentially up to 80%).
 - Improved regression test coverage within short time frames
 - Flexibility, as individual modules can be tested independently
 - Radical improvement in consistency and uniformity of the testing process
 - Easy modification of reusable components; once created and benchmarked, the automation suite is flexible, repeatable, and stable.

Test Automation Framework by DST Worldwide Services


New testing platform supports quality and cost management associated with test scripts


KANSAS CITY, Mo., June 29, 2011 /PRNewswire/ -- DST Worldwide Services (DSTWS) has launched a new test automation framework, a leading-edge platform designed for automating functional and regression testing in system environments.


This solution was created using industry-standard process frameworks to provide comprehensive automation capabilities and address the key challenges of traditional test automation approaches. The test automation framework enables repeatability, re-usability and faster development of test scripts, supporting increased quality at decreased costs. This results in speedier time to market and enables subject matter experts to spend more time testing complex system functionality.

The test automation framework includes comprehensive logging and reporting capabilities, and has the ability to support multiple sets of data. It offers scheduling of test scenarios and can be easily integrated with testing tools such as QuickTest Professional, SilkTest, Selenium and Quality Center.

Read more at: https://www.prnewswire.com/news-releases/dst-worldwide-services-launches-test-automation-framework-124707638.html

Systems Thinking Required for Developing a Testing Workforce


By Pradeep C | CEO - Edista Testing Institute

QAI participated in SofTec 2011, conducted at the NIMHANS Convention Centre, Bangalore, on 2nd July 2011, and presented on the need for a systems thinking approach to developing a testing workforce to meet the growing demand for skilled testers.

The conference promotes the sharing of software testing experiences by bringing together software test professionals, practitioners, experts, academicians, and service/product vendors to share techniques, methodologies, frameworks, experiences, and case studies for performing, managing, and automating software testing.

Speaking on the occasion, Mr. Pradeep C, Founder & CEO, emphasized the challenges of the existing state of practice in workforce selection and the need for innovative workforce strategies for creating successful test organizations. He led a highly interactive session on "Workforce strategies for creating successful test organizations." The leadership track session focused on questioning the existing paradigms for capability and capacity development, and highlighted how a change of perspective can provide an innovative answer to the current problems faced by testing heads. His presentation focused on the need for structured assessments to determine and focus on areas of improvement at the individual and organizational levels. Additionally, the assessments need to focus on identifying role-specific capabilities for an individual using adaptive, intelligent assessment methods.

Read More at: https://www.prsafe.com/new_press_releases/view/3675

Simple Factors for Risk Based Software Testing


(by Rex Black) We work with a number of clients to help them implement risk-based testing. It's important to keep the process simple enough for broad-based participation by all stakeholders. A major part of doing so is simplifying the process of assessing the level of risk associated with each risk item.

To do so, we recommend that stakeholders assess two factors for each risk item:

 - Likelihood.  Upon delivery for testing, how likely is the system to contain one or more bugs related to the risk item?
 - Impact.  If such bugs were not detected in testing and were delivered into production, how bad would the impact be?
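
The two factors above combine naturally into a single priority number for ordering test effort. A minimal sketch (the 1-to-5 scales and the multiplication rule are a common convention, not something the article mandates):

# Each risk item gets likelihood and impact ratings on a 1 (low) to 5 (high) scale.
risk_items = [
    ("payment processing", 4, 5),
    ("report formatting",  3, 2),
    ("password reset",     2, 4),
]

# Conventional risk priority number: likelihood x impact; test highest first.
for name, likelihood, impact in sorted(risk_items, key=lambda r: r[1] * r[2], reverse=True):
    print(f"{name:<20} RPN={likelihood * impact}")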

Read More at: https://goo.gl/rR4c0