Effective Management of Test Automation Failures

In recent years, much attention has been paid to setting up Test Automation frameworks which are effective, easy to maintain, and allow the whole testing team to contribute to the testing effort. In doing so, we often leave out one of the most critical considerations of Test Automation: What do we do when the Test Automation doesn’t work correctly?

Testing teams need to develop a practical solution for determining who’s accountable for analyzing Test Automation failures, and ensure that the right processes and skills exist to effectively do the analysis.

There are 3 primary reasons why your Test Automation may not work correctly:

  1. There is an error in the automated test itself
  2. The application under test (AUT) has changed
  3. The Automation has uncovered a bug in the AUT

Whenever an automated test fails, the first step is to figure out what happened. So who should be doing this?

Too often in testing organizations, as soon as a Test Engineer runs into a problem with the Test Automation, they simply tell the Automation Engineer, “Hey, the Test Automation isn’t working!” The job of analysis then falls to the Automation Engineer, who is already overburdened with implementing and maintaining new and existing Test Automation.

How can we push this analysis ‘upstream’ to the Test Engineers who execute the Test Automation? In order to do this, we must first look at why the Test Engineers don’t feel that they can or should analyze the issues.

In a typical ‘scripting approach’ to Test Automation, the Test Engineer first writes a verbose test case, typically in Word, Excel, or some sort of in-house or third-party test case management tool. Once that task is completed, the Test Engineer effectively “throws it over the wall” to the Automation Engineer. The Automation Engineer then creates a scripted version of the test case and “throws it back over the wall” to the Test Engineer, who then executes the automated test.
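To make the contrast concrete, here is a minimal sketch, in Python with Selenium, of what such a scripted test might look like; the URL, element locators, and credentials are hypothetical and purely illustrative.

    # A minimal sketch of a scripted test against a hypothetical login page.
    # The URL, element IDs, and credentials are illustrative only.
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    def test_login():
        driver = webdriver.Chrome()
        try:
            driver.get("https://example.com/login")
            driver.find_element(By.ID, "username").send_keys("jsmith")
            driver.find_element(By.ID, "password").send_keys("secret")
            driver.find_element(By.ID, "login-button").click()
            # The intent of the test is buried in locator and driver details
            # that a non-programming Test Engineer may struggle to read.
            assert "Dashboard" in driver.title
        finally:
            driver.quit()

When a test written this way fails, the Test Engineer has little to go on beyond a stack trace, which is exactly why the analysis gets pushed back to the Automation Engineer.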

More often than not, the Test Engineer will not understand the scripted test very well. If something breaks, they rely on the Automation Engineer to figure out what went wrong. This situation undermines the four fundamental tasks that an experienced Test Engineer must be able to do:

  1. Design/write tests.
  2. Execute tests and identify/seek out failure.
  3. Analyze a failure for reproducibility and ideas to incorporate into new tests.
  4. Report a failure and/or bug.

At a minimum, the Test Engineer should be able to analyze the results of the automated tests and figure out whether a failure is due to an actual bug in the AUT. If there is no apparent bug, the Test Engineer should then determine whether a change occurred in the application. Finally, if there is no apparent bug or change in the AUT, they may confidently conclude that the issue was caused by an error in the Automation itself.

So how can you empower the Test Engineer to analyze Test Automation failures? It’s simple, really: if your Test Engineers can create automated tests themselves, then they will be empowered to analyze those tests when they don’t work. In our experience, a Keyword-Driven Test Automation framework is the best way to enable your test engineers to effectively own the analysis of Test Automation failures.
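To illustrate why this helps, here is a minimal keyword-driven sketch in Python; the action words, test data, and dispatcher are hypothetical, and a real framework would map each keyword to UI-automation code maintained by the Automation Engineer.

    # A keyword-driven test is readable data: each row is an action word
    # plus its argument, so the Test Engineer can write and review it.
    LOGIN_TEST = [
        ("open application", "https://example.com/login"),
        ("enter username",   "jsmith"),
        ("enter password",   "secret"),
        ("click",            "login button"),
        ("check title",      "Dashboard"),
    ]

    # Stubbed keyword implementations; in a real framework these wrap a
    # UI driver and are the only place where scripting lives.
    def make_keywords():
        def log(action):
            return lambda argument: print(f"{action}: {argument}")
        return {action: log(action) for action, _ in LOGIN_TEST}

    def run(test, keywords):
        """Dispatch each (action word, argument) row to its implementation."""
        for action, argument in test:
            keywords[action](argument)

    if __name__ == "__main__":
        run(LOGIN_TEST, make_keywords())

Because the test itself is plain data, a failure can be reported against the exact action word and argument that failed, which lets the Test Engineer begin the analysis without reading any code.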

With a properly implemented Keyword-Driven Test Automation framework, the analysis of a Test Automation failure consists of the following steps:

  1. Did the Test Automation uncover a bug in the AUT? (Done by a Test Engineer)
  2. Was the failure caused by a change in the AUT? (Done by a Test Engineer and/or Automation Engineer)
  3. Was the failure caused by an error in the Automation itself? (Done by an Automation Engineer)

With Keyword-Driven Test Automation, scripting is kept to a minimum, so most of your failures will occur due to bugs or changes in the AUT. Test Engineers should be able to do most of the failure analysis, freeing your Automation Engineers to focus more on creating new automated tests, and allowing you to further increase your test coverage, reduce testing time, decrease maintenance, and most importantly, create higher quality products!

Hung Nguyen

Hung Nguyen co-founded LogiGear in 1994, and is responsible for the company’s strategic direction and executive business management. His passion and relentless focus on execution and results have been the driver for the company’s innovative approach to software testing, test automation, testing tool solutions and testing education programs.

Hung is co-author of the top-selling book in the software testing field, “Testing Computer Software,” (Wiley, 2nd ed. 1993) and other publications including, “Testing Applications on the Web,” (Wiley, 1st ed. 2001, 2nd ed. 2003), and “Global Software Test Automation,” (HappyAbout Publishing, 2006). His experience prior to LogiGear includes leadership roles in software development, quality, product and business management at Spinnaker, PowerUp, Electronic Arts and Palm Computing.

Hung holds a Bachelor of Science in Quality Assurance from Cogswell Polytechnical College, and completed a Stanford Graduate School of Business Executive Program.
