Effective Management of Test Automation Failures

In recent years, much attention has been paid to setting up Test Automation frameworks which are effective, easy to maintain, and allow the whole testing team to contribute to the testing effort. In doing so, we often leave out one of the most critical considerations of Test Automation: What do we do when the Test Automation doesn’t work correctly?

Testing teams need to develop a practical solution for determining who’s accountable for analyzing Test Automation failures, and ensure that the right processes and skills exist to effectively do the analysis.

There are three primary reasons why your Test Automation may not work correctly:

  1. There is an error in the automated test itself
  2. The application under test (AUT) has changed
  3. The Automation has uncovered a bug in the AUT

The first step whenever an automated test fails is to figure out what happened. So who should do this analysis?

Too often in testing organizations, as soon as a Test Engineer runs into a problem with the Test Automation, they simply tell the Automation Engineer, “Hey, the Test Automation isn’t working!” The job of analysis then falls to the Automation Engineer, who is already overburdened with implementing and maintaining new and existing Test Automation.

How can we push this analysis ‘upstream’ to the Test Engineers who execute the Test Automation? To do this, we must first look at why Test Engineers don’t feel that they can or should analyze the issues.

In a typical ‘scripting approach’ to Test Automation, the Test Engineer first writes a verbose test case, typically in Word, Excel, or some sort of in-house or 3rd-party test case management tool. Once that task is completed, the Test Engineer effectively “throws it over the wall” to the Automation Engineer. The Automation Engineer then creates a scripted version of the test case and “throws it back over the wall” to the Test Engineer, who then executes the automated test.

More often than not, the Test Engineer will not understand the scripted test very well. If something is broken, they rely on the Automation Engineer to figure out what went wrong. This situation undermines the four fundamental tasks that an experienced Test Engineer must be able to perform:

  1. Design/write tests.
  2. Execute tests and identify/seek out failure.
  3. Analyze a failure for reproducibility and ideas to incorporate into new tests.
  4. Report a failure and/or bug.

At a minimum, the Test Engineer should be able to analyze the results of the automated tests and figure out whether a failure is due to an actual bug in the AUT. If there is no apparent bug, the Test Engineer should be able to determine whether a change occurred in the application. Finally, if there is no apparent bug or change in the AUT, they may confidently conclude that the issue was caused by an error in the Automation.
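
That order of checks can be sketched as a simple decision routine. The sketch below is purely illustrative: the Failure fields stand in for judgments a Test Engineer makes by hand or with tooling, and the names are invented for this example rather than taken from any particular framework.

    from dataclasses import dataclass
    from enum import Enum, auto

    class Cause(Enum):
        AUT_BUG = auto()           # the Automation uncovered a real defect
        AUT_CHANGE = auto()        # the AUT changed; the test needs updating
        AUTOMATION_ERROR = auto()  # the automated test itself is broken

    @dataclass
    class Failure:
        reproduces_manually: bool  # do the same steps fail when performed by hand?
        aut_changed: bool          # has the screen/API changed since the last passing run?

    def triage(f: Failure) -> Cause:
        """Apply the checks in the order described above."""
        if f.reproduces_manually:
            return Cause.AUT_BUG
        if f.aut_changed:
            return Cause.AUT_CHANGE
        return Cause.AUTOMATION_ERROR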

So how can you empower the Test Engineer to analyze Test Automation failures? It’s simple, really. If your Test Engineers can create automated tests themselves, then they will be empowered to analyze those tests when they don’t work. In our experience, a Keyword-Driven Test Automation framework is the best way to enable your test engineers to effectively own the analysis of Test Automation failures.
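
To make that concrete, here is a toy sketch of a keyword-driven test: the test case is a table of business-level keywords and arguments, and a small engine maps each keyword to an implementation. The keywords, actions, and engine here are invented for illustration; they are not LogiGear’s framework or any specific tool.

    def login(user, password):
        # Placeholder action; a real framework would drive the AUT here.
        if not password:
            raise AssertionError("login rejected: empty password")

    def check_balance(expected):
        actual = "100.00"  # stand-in for a value read from the AUT
        assert actual == expected, f"expected {expected}, got {actual}"

    KEYWORDS = {"login": login, "check balance": check_balance}

    # The test case a Test Engineer writes: readable rows, no scripting.
    TEST_CASE = [
        ("login", ["alice", "s3cret"]),
        ("check balance", ["100.00"]),
    ]

    def run(test_case):
        for step, (keyword, args) in enumerate(test_case, start=1):
            try:
                KEYWORDS[keyword](*args)
                print(f"step {step}: {keyword} ... OK")
            except Exception as exc:
                # Failures are reported at the keyword level, so a Test
                # Engineer can read them without knowing the code underneath.
                print(f"step {step}: {keyword} ... FAILED ({exc})")
                return False
        return True

    run(TEST_CASE)

Because a failing row names a business action rather than a line of script, the Test Engineer can begin the analysis without reading any automation code.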

With a properly implemented Keyword-Driven Test Automation framework, the analysis of a Test Automation failure consists of the following steps:

  1. Did the Test Automation uncover a bug in the AUT? (Done by a Test Engineer)
  2. Was the failure caused by a change in the AUT? (Done by a Test Engineer and/or Automation Engineer)
  3. Was the failure caused by an error in the Automation itself? (Done by an Automation Engineer)

With Keyword-Driven Test Automation, scripting is kept to a minimum, so most of your failures will occur due to bugs or changes in the AUT. Test Engineers should be able to do most of the failure analysis, freeing your Automation Engineers to focus more on creating new automated tests, and allowing you to further increase your test coverage, reduce testing time, decrease maintenance, and most importantly, create higher quality products!

Hung Nguyen

Hung Nguyen co-founded LogiGear in 1994 and is responsible for the company’s strategic direction and executive business management. His passion and relentless focus on execution and results have been the drivers of the company’s innovative approach to software testing, test automation, testing tool solutions, and testing education programs.

Hung is co-author of the top-selling book in the software testing field, “Testing Computer Software” (Wiley, 2nd ed. 1993), and other publications including “Testing Applications on the Web” (Wiley, 1st ed. 2001, 2nd ed. 2003) and “Global Software Test Automation” (Happy About Publishing, 2006). His experience prior to LogiGear includes leadership roles in software development, quality, product and business management at Spinnaker, PowerUp, Electronic Arts, and Palm Computing.

Hung holds a Bachelor of Science in Quality Assurance from Cogswell Polytechnical College, and completed a Stanford Graduate School of Business Executive Program.
