Why is software testing so critical?


The answer is simple. Software bugs and errors are so widespread and so detrimental that they cost the US economy an estimated 0.6 percent of the gross domestic product, roughly $100 billion annually. Half of these costs are borne by users and the other half by software developers and vendors. We must remember that nearly every business and industry in the United States depends on the development, marketing, and after-sales support of software products and services. A study conducted by the Department of Commerce's National Institute of Standards and Technology (NIST) estimated that more than a third of these costs could be eliminated by an improved software testing infrastructure, including better tools for load testing, stress testing, and performance testing.

Read more at: http://goo.gl/MIwfZ

Required Software Testers - Performance Testing | Job Location Gurgaon ?


Qualification : B.Tech/B.E., Bachelor of Computer Applications (B.C.A.), Master of Computer Applications (M.C.A.), Master of Technology (M.Tech/ME)
Experience :	Min 2 years, Max 5 years
Job Description :	QA Engineers with 2-5 years of experience in performance analysis, performance monitoring, and automated scripting for performance data logging preferred. Exposure to load management tools and hands-on skills in adapting open source tools will be an asset.
Functional Area : IT / Telecom - Software
Location : Gurgaon
Country : India


Apply-> jobs@movicotech.com (with Subject as "Resume for Performance Tester")

Required Software Tester (Quality Engineer) in Gurgaon ?


Cvent is looking for a talented quality engineer to join our Technology team, which designs, develops and operates large-scale, Web-based applications. The Quality Engineer ensures the quality and integrity of the application. This position plays an integral role in application usability, providing product feedback at all stages of the development life cycle. This is an entry-level position with significant opportunity for career growth.
Star performers in this profile receive a unique opportunity to visit the Cvent Headquarters office in the US, for 3 months of further learning and career development.


Position Duties
· Write and execute test plans for the application
· Review and assist in the development of test plans being prepared by others
· Document defects and work with engineers to resolve issues
· Provide usability feedback to the product team
· Assist in applying application standards


Candidate Requirements:
· B.E. / B.Tech / MCA (Must)
· 0-1 years of relevant experience
· Excellent problem solving and analytical skills
· Superior attention to detail
· Interest in technology and a hunger for learning
· Ability to work independently and as part of a team
· Knowledge of relational databases and Microsoft Office required
· Knowledge of SQL, Software Development Lifecycle, HTML, XML, and automated testing tools a plus

Email your resume at: careersindia@cvent.com

Crowd Sourced Testing ?


(By Rajini Padmanaban, Director of Engagement, Global Testing Services)

Given the global distribution of software and how the internet is bringing the world together, community-based testing activities have been gaining a lot of momentum in recent years. Such activities include forum discussions, beta testing efforts, crowd sourced testing, etc. Of specific interest in this blog is what crowd sourced testing is and when this model can be leveraged to yield success.

In simple terms, crowd sourced testing is leveraging the community at large to test a given product. This community spans people from diverse cultures, geographies, languages, and walks of life, who put the software to use under very realistic scenarios that a tester in the core test team may not be able to think of, given his or her limited bounds of operation.

Read More at: http://www.qainfotech.com/blog/2011/06/crowd-sourced-testing-is-it-really-for-you/

A Note on globalization testing?



The goal of globalization testing is to detect potential problems in application design that could inhibit globalization. It makes sure that the code can handle all international support without breaking functionality that would cause either data loss or display problems. Globalization testing checks proper functionality of the product with any of the culture/locale settings using every type of international input possible. 

Proper functionality of the product assumes both a stable component that works according to design specification, regardless of international environment settings or cultures/locales, and the correct representation of data.

The following must be part of your globalization-testing plan:

Decide the priority of each component

To make globalization testing more effective, assign a testing priority to all tested components. Top priority should go to components that:
- Support text data in the ANSI (American National Standards Institute) format
- Extensively handle strings (for example, components with many edit controls)
- Use files for data storage or data exchange (e.g., Windows metafiles, security configuration tools, and Web-based tools)
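The string-handling checks above can be sketched as a small automated smoke test. This is a minimal, illustrative example: the `normalize_name` routine and the sample inputs are hypothetical stand-ins, not part of any real product.

```python
# Minimal globalization smoke test: feed non-ASCII input from several
# locales through a string-handling routine and check for data loss.

def normalize_name(name: str) -> str:
    """Hypothetical component under test: trims and title-cases a name."""
    return name.strip().title()

# International inputs a core test team might overlook.
SAMPLES = {
    "de": "  jürgen müller  ",
    "fr": "  éloïse dubois  ",
    "tr": "  ayşe yılmaz  ",
}

def test_no_data_loss():
    for locale_tag, raw in SAMPLES.items():
        result = normalize_name(raw)
        # Round-trip through UTF-8 must not lose or mangle characters.
        assert result == result.encode("utf-8").decode("utf-8"), locale_tag
        # The normalized string must keep every non-space character.
        assert sorted(result.lower().replace(" ", "")) == \
               sorted(raw.strip().lower().replace(" ", "")), locale_tag

test_no_data_loss()
print("globalization smoke test passed")
```

The same pattern extends to file-based components: write data out under one locale setting, read it back under another, and assert the round trip is lossless.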

Source: http://www.software-testing-india.info/globalization-testing.html

A note on USABILITY TESTING ?


Visitors to your company's website may have a wide range of Internet experience and, consequently, different expectations that must be fulfilled to win them over. While experienced users look for implementation of industry norms, newcomers need guidance to navigate the unfamiliar Web environment.

Failure to cater to such expectations is likely to result in lost sales, as visitors are unable to locate what they are looking for or to complete transactions.

Usability testing starts by identifying specific demographic groups within the target audience, taking into account their age, profession, cultural background, level of Internet exposure and many other relevant factors. 

Goals of usability testing

Usability testing is a black-box testing technique. The aim is to observe people using the product to discover errors and areas of improvement. Usability testing generally involves measuring how well test subjects respond in four areas: efficiency, accuracy, recall, and emotional response. The results of the first test can be treated as a baseline or control measurement; all subsequent tests can then be compared to the baseline to indicate improvement.

1. Efficiency -- How much time, and how many steps, are required for people to complete basic tasks? (For example, find something to buy, create a new account, and order the item.)

2. Accuracy -- How many mistakes did people make? (And were they fatal or recoverable with the right information?)

3. Recall -- How much does the person remember afterwards, or after periods of non-use?

4. Emotional response -- How does the person feel about the tasks completed? Is the person confident or stressed? Would the user recommend this system to a friend?
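The baseline-versus-follow-up comparison described above can be automated once each round's measurements are recorded. A minimal sketch follows; the metric names and numbers are illustrative assumptions, not real study data.

```python
# Sketch: compare a usability-test round against a baseline across the
# four measures named above. All data values are illustrative.

BASELINE = {"task_time_sec": 210, "errors": 5, "recall_pct": 60, "satisfaction": 3.1}
ROUND_2  = {"task_time_sec": 150, "errors": 2, "recall_pct": 75, "satisfaction": 4.0}

# For task time and error counts, lower is better; for recall and
# satisfaction, higher is better.
LOWER_IS_BETTER = {"task_time_sec", "errors"}

def improvement(baseline, current):
    """Return {metric: (verdict, delta)} relative to the baseline round."""
    report = {}
    for metric, base in baseline.items():
        delta = current[metric] - base
        improved = delta < 0 if metric in LOWER_IS_BETTER else delta > 0
        report[metric] = ("improved" if improved else "regressed", delta)
    return report

for metric, (verdict, delta) in improvement(BASELINE, ROUND_2).items():
    print(f"{metric}: {verdict} ({delta:+g})")
```

Each subsequent test round is fed through the same comparison, so progress against the control measurement is visible at a glance.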

Emerging Trends in Security Testing?


(by APP Labs) Today the application security testing space is not what it used to be. There are several trends that are affecting the development and testing of next generation applications from a security perspective. The three towering facets that are rewriting the conventional path taken for security testing are – Cloud, Mobile and Rich Internet Application (RIA) platforms.

RIAs challenge traditional application security testing tools, which tend to focus on testing the web server side of the application. With RIA, the client side of the application logic has become equally important, if not more so, and has to be tested as well. This is bringing in new tides of challenges.

Cloud platforms will require application security testing tools to evolve to support the testing of applications built for specific cloud platforms, using cloud-specific languages and frameworks. Cloud platforms are also driving demand for testing of the XML-based APIs used to reach out to and consume cloud-based services.

Read More - http://blog.applabs.com/index.php/2011/01/emerging-trends-in-security-testing/

SYSTEM AND USER ACCEPTANCE TESTING ?


System testing usually refers to the testing of a specific system in a controlled environment to ensure that it will perform as expected and as required. From a Systems Development perspective, the term System Testing refers to the testing performed by the development team (programmers and other technicians) to ensure that the system works module by module (unit testing) and also as a whole. 

System Testing should ensure that each function of the system works as expected and all errors (bugs) are detected and analysed. It should also ensure that interfaces for export and import routines will function as required. After meeting the criteria of the Test Plan, the software moves to the next phase of quality check and undergoes User Acceptance Testing.
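The module-by-module checks and export/import interface checks described above can be illustrated with Python's `unittest`; the `export_csv` routine here is a hypothetical stand-in, not a real system component.

```python
# Sketch of unit-level and interface testing with Python's unittest.
import unittest

def export_csv(rows):
    """Hypothetical export routine under test: rows of values to CSV text."""
    return "\n".join(",".join(str(value) for value in row) for row in rows)

class ExportTests(unittest.TestCase):
    def test_single_row(self):
        # Unit check: one module, one behavior.
        self.assertEqual(export_csv([("a", 1)]), "a,1")

    def test_round_trip(self):
        # Interface check: what the export routine writes must be
        # re-importable without loss.
        exported = export_csv([("a", 1), ("b", 2)])
        imported = [tuple(line.split(",")) for line in exported.split("\n")]
        self.assertEqual(imported, [("a", "1"), ("b", "2")])

if __name__ == "__main__":
    unittest.main(exit=False)
```

Only after such module- and interface-level criteria in the test plan are met does the software move on to User Acceptance Testing.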

User Acceptance Testing: UAT refers to the test procedures which lead to formal 'acceptance' of new or changed systems. User Acceptance Testing is a critical phase of any project and requires significant participation of 'End Users'. An Acceptance Test Plan is also developed detailing the means by which 'Acceptance' will be achieved. The final part of the UAT can also include a parallel run to compare the new system against the current one. 

The User Acceptance Test Plan will vary from system to system but, in general, the testing should be planned in order to provide realistic and adequate exposure. The testing can be based upon User Requirements Specifications to which the system should conform. However, problems will continue to arise and it is important to determine what will be the expected and required responses from various parties concerned including Users, Project Team, Vendors and possibly Consultants/Contractors. 

Don’t Measure All Software Defects Equally?


(By App Labs) Quality cannot simply be built into a software application right before it gets launched; it must be part of the software life cycle from the requirements phase through to production. With the growing emphasis on software quality, most enterprises are investing in advanced tools, processes, and people, particularly in testing and quality assurance. But the growing need to develop and update applications faster to stay ahead of the competition, together with stringent project deadlines, constrains enterprises from setting aside enough time for testing.

While resolving all defects is important, the effort invested can vary based on the priority of the defects. All defects may not have the same impact on the application, and hence smarter testing based on defect severity is what enterprises need to improve quality while meeting time constraints.
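Severity-based triage can be as simple as weighting each defect and working the queue in order. A minimal sketch, with illustrative severity levels, weights, and defects (none taken from a real tracker):

```python
# Sketch of severity-weighted triage: queue defects so the most severe
# get tested and fixed first, instead of treating all defects equally.

SEVERITY_WEIGHT = {"critical": 4, "major": 3, "minor": 2, "cosmetic": 1}

defects = [
    {"id": "D-101", "severity": "minor",    "title": "Tooltip typo"},
    {"id": "D-102", "severity": "critical", "title": "Checkout crash"},
    {"id": "D-103", "severity": "major",    "title": "Slow report export"},
]

# Highest-severity defects first.
triaged = sorted(defects, key=lambda d: SEVERITY_WEIGHT[d["severity"]],
                 reverse=True)

for defect in triaged:
    print(defect["id"], defect["severity"], "-", defect["title"])
```

In practice the weight would also fold in business impact or frequency, but the principle is the same: effort follows severity, not arrival order.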

Read More at: http://blog.applabs.com/index.php/2011/03/dont-measure-all-software-defects-equally/

Web Testing with Automation anywhere?


Businesses and applications today are increasingly moving to web based systems. Time tracking systems, CRM, HR and payroll systems, financial software, materials management, order tracking systems and report generation, everything is web based.

Automation Anywhere can automate all web based processes without any programming; from simple, online form-filling to more complicated tasks like data transfer, web data extraction, image recognition or process automation.

SMART Automation Technology from Automation Anywhere offers over 180 powerful actions for web automation. Automation Anywhere works with any website, even complex websites using Java, JavaScript, AJAX, Flash, or iFrames. Agent-less remote deployment allows automated tasks to be run on various machines across the network. The advanced Web Recorder ensures accurate re-runs, taking website changes into account.

Automation Anywhere offers two easy options to automate web tasks: use the powerful Web Recorder, or use the editor with Point & Click wizards to automate tasks in minutes.

Web Recorder: Use the ‘Record’ button to simply record your actions. The ‘Web Recorder’ tool uses SMART Automation Technology to account for website changes or web control position changes to ensure that recorded tasks continue to run smoothly.

Watch demo video here: http://www.automationanywhere.com/lrn/keyFeat/webRecorder.htm?r=examples

Types of testing tools with examples?


- Test Project Management - MS Project and Test Director
- Defect Management - Test Director, PVCS Defect Tracker, Bugzilla, and Rational ClearQuest
- Regression Test Automation - WinRunner, Rational Robot, QuickTest Pro, and QES Architect
- Coverage Management - Rational RequisitePro and Mercury Test Director
- Performance Testing - Mercury LoadRunner, Rational Performance Studio, and Compuware QA
- Configuration Management - Visual SourceSafe and PVCS
- Test Data - Thinksoft Test Data Manager
- Dynamic Code Coverage - Rational PureCoverage

Open source solns have lesser software flaws (by CIOL)?


The debate over the use of open source technologies in security products is growing day by day. While many companies are already using open source security products, a few are still evaluating whether to use open source applications to protect their IT products.

In an interaction with Abhigna NG of CIOL, Rahul Kopikar, Head - Business Development at Seclore, shared his views on the adoption of open source products by enterprises and the best practices that developers need to follow while developing open source applications. Excerpts:

CIOL: How safe is it to use open source applications with the increase in malware attacks?

Rahul Kopikar: It is pretty safe as long as the software has gone through stringent QC and testing. The notion that open source is prone to malware attacks is wrong. On the contrary, proprietary software is more prone to malware attacks because it undergoes limited testing, whereas with open source software the whole worldwide community contributes to and tests the system.

Read More at: http://www.ciol.com/Developer/Open-Source/Feature/Open-source-solns-have-lesser-software-flaws/152619/0/

Automation Test Tool Selection | Which automation tool is good for use ?


Before choosing any automation tool, make sure that the tool has the following features:

 - Significant reduction in time taken per testing iteration for future application releases
 - Savings in manpower & associated costs, owing to reduced manual testing efforts (potentially up to 80%).
 - Improved regression test coverage within short time frames
 - Flexibility, as individual modules can be tested independently
 - Radical improvement in consistency and uniformity of the testing process
 - Easy modification of reusable components; once created and benchmarked, the automation suite is flexible, repeatable, and stable.

Test Automation Framework by DST Worldwide Services?


New testing platform supports quality and cost management associated with test scripts


KANSAS CITY, Mo., June 29, 2011 /PRNewswire/ -- DST Worldwide Services (DSTWS) has launched a new test automation framework, a leading-edge platform designed for automating functional and regression testing in system environments.


This solution was created using industry-standard process frameworks to provide comprehensive automation capabilities and address the key challenges of traditional test automation approaches. The test automation framework enables repeatability, re-usability and faster development of test scripts, supporting increased quality at decreased costs. This results in speedier time to market and enables subject matter experts to spend more time testing complex system functionality.

The test automation framework includes comprehensive logging and reporting capabilities, and has the ability to support multiple sets of data. It offers scheduling of test scenarios and can be easily integrated with testing tools such as QuickTest Professional, SilkTest, Selenium and Quality Center.

Read more at: http://www.prnewswire.com/news-releases/dst-worldwide-services-launches-test-automation-framework-124707638.html

System Thinking required for Developing Testing Workforce?


By Pradeep C | CEO - Edista Testing Institute

QAI participated in SofTec 2011, conducted at the NIMHANS Convention Centre, Bangalore, on 2nd July 2011, and presented on the need for a system thinking approach to developing a testing workforce to meet the growing demand for skilled testers.

SofTec promotes the sharing of software testing experiences by bringing together software test professionals, practitioners, experts, academicians, and service/product vendors to share techniques, methodologies, frameworks, experiences, and case studies on performing, managing, and automating software testing.

Speaking on the occasion, Mr. Pradeep C, Founder & CEO, emphasized the challenges of the existing state of practice in workforce selection and the need for innovative workforce strategies for creating a successful test organization. He led a highly interactive session on 'Workforce strategies for creating successful test organizations'. The leadership track session focused on questioning the existing paradigms for capability and capacity development, and highlighted how a change of perspective can provide an innovative answer to the current problems faced by testing heads. His presentation focused on the need for structured assessments to determine and focus on areas of improvement at both an individual and an organizational level. Additionally, the assessments need to focus on identifying role-specific capabilities for an individual using adaptive, intelligent assessment methods.

Read More at: http://www.prsafe.com/new_press_releases/view/3675

Simple Factors for Risk Based Software Testing?


(By Rex Black) We work with a number of clients to help them implement risk based testing. It's important to keep the process simple enough for broad-based participation by all stakeholders. A major part of doing so is simplifying the process of assessing the level of risk associated with each risk item.

To do so, we recommend that stakeholders assess two factors for each risk item:

 - Likelihood.  Upon delivery for testing, how likely is the system to contain one or more bugs related to the risk item?
 - Impact.  If such bugs were not detected in testing and were delivered into production, how bad would the impact be?
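The two factors above combine into a simple risk priority: likelihood times impact, with the highest-scoring items tested first and deepest. A minimal sketch, using an illustrative 1-5 scale and made-up risk items:

```python
# Sketch of two-factor risk scoring: likelihood x impact, both on a
# 1-5 scale (1 = low, 5 = high). Risk items and scores are illustrative.

risk_items = [
    # (risk item, likelihood, impact)
    ("payment processing", 4, 5),
    ("report formatting",  3, 2),
    ("user login",         2, 5),
]

# Higher risk priority means test earlier and more deeply.
ranked = sorted(risk_items, key=lambda r: r[1] * r[2], reverse=True)

for name, likelihood, impact in ranked:
    print(f"{name}: risk priority {likelihood * impact}")
```

Keeping the scale coarse (two factors, a handful of levels) is what lets all stakeholders, not just testers, participate in the assessment.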

Read More at: http://goo.gl/rR4c0