Should Automated Acceptance Tests Use the GUI the Application Provides?

Introduction

As a consultant and trainer, I am often asked by my clients and students how to deal with automated acceptance tests. One common question is whether automated acceptance tests should use the graphical user interface (GUI) provided by the application.

Consider the Context

Like all testing questions, this is a question that can only be answered well when we consider context – the missions and the givens of the project and the product. We do that by asking more questions.

  • What questions are we trying to ask and answer with automated tests?
  • To what extent has testing been done at lower levels of the application?
  • To what extent are we trying to confirm known behavior vs. trying to discover or investigate as-yet-unrecognized problems?
  • What tools do we have available? What tools will we have to obtain?
  • What is our team’s perceived level of experience and skills with the tools?
  • Has the application been developed for testability? Does it have an easily scriptable interface below the GUI? Does it provide log files?
  • Can we get changes to the product and support from the developers when the application is insufficiently testable (e.g. when HTML elements are missing id attributes)?
  • How might we value speed, precision, and volume of tests? Is the GUI the place where we will obtain optimal automation value for those considerations?
  • To what extent do we need to model real users and observe what they observe? To what extent is automation capable of making such observations? What things will a conscious, conscientious human – a skilled tester – be likely to notice that automation might miss?
  • How might we create or use oracles that will allow the tool to help us to recognize a problem? (For more on oracles see Doug Hoffman’s three part article “Using Oracles in Testing and Test Automation” (1))
  • Is there a need to vary the data being used in a productive way (systematically, pseudo-randomly, or randomly)?
  • How might we leverage GUI automation to drive the application quickly to a point where human testers can take over?
  • How might automated GUI tests drive us toward confirmation bias – that is, the tendency to design and execute tests that confirm existing beliefs about the product?
  • How might automated GUI tests lead us towards automation bias – the tendency to view results from automated processes as superior to those from human processes?
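One of the questions above asks whether we need to vary test data productively. As a minimal sketch of what that can look like (the function and values here are hypothetical, not from any particular product): mixing systematic boundary values with pseudo-random ones, seeded so that a failing run can be reproduced exactly.

```python
import random

def vary_order_totals(seed=42, count=5):
    """Generate order totals for testing: systematic boundary values
    plus pseudo-random ones. A fixed seed keeps runs repeatable, so a
    failure can be reproduced and investigated."""
    rng = random.Random(seed)
    boundaries = [0.00, 0.01, 999999.99]          # systematic variation
    randoms = [round(rng.uniform(0.01, 1000.00), 2) for _ in range(count)]
    return boundaries + randoms

totals = vary_order_totals()
```

Changing the seed per run (and logging it) gives fresh variation while preserving reproducibility – a useful middle ground between fully systematic and fully random data.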

This might look like a long list, but you don’t have to answer these questions in great detail, nor do you have to get each one exactly right. A couple of moments of reflection on each one – a couple of minutes altogether – is far better than forgetting to consider them at all. If you are uncertain or stuck, a few more minutes of investigation or exploration could provide a huge return on investment. While you are in the middle of the project, continue to ask these questions periodically as a means of ensuring that the value of what you’re doing exceeds its cost.

Experience and Observation

In general, my experience and observation have been that as testing gets closer to the GUI, the cost of automating tests increases while the value derived from them decreases.

Automated tests at the unit level tend to be:

  • Simpler to automate
  • Easier to comprehend and maintain
  • More subject to falsifiable assertions that machines can evaluate
  • Appropriately specific, where specificity matters
  • Revealing for problems that are easier to troubleshoot and debug
  • More immediately responsive to developers (since the developers tend to be the ones writing and running them)
  • Confirmatory, in a place where confirmation is more useful and important

Automated tests at the GUI level tend to be:

  • Complex and difficult to program
  • Hard to understand and maintain
  • Inadequate for recognizing problems that humans can easily identify
  • Overly specific in places where ambiguity can be tolerated by people
  • Revealing for problems that are more difficult to troubleshoot and debug
  • Less immediately responsive to developers
  • Confirmatory, at a level where investigation and discovery are more important
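One common source of the complexity and maintenance cost listed above is locator brittleness: without stable ids, a GUI test must locate elements by their position in the page structure, and any layout change breaks it. A hypothetical sketch, using only the standard library to simulate the lookup (a real GUI test would use a browser-driving tool):

```python
from html.parser import HTMLParser

class IdFinder(HTMLParser):
    """Collect the id attributes present in a page;
    a stand-in for a real DOM lookup."""
    def __init__(self):
        super().__init__()
        self.ids = []

    def handle_starttag(self, tag, attrs):
        for name, value in attrs:
            if name == "id":
                self.ids.append(value)

# Two renderings of the "same" page: the layout changed between releases.
v1 = '<div><div><button id="submit-order">Buy</button></div></div>'
v2 = '<div><section><div><button id="submit-order">Buy</button></div></section></div>'

for page in (v1, v2):
    finder = IdFinder()
    finder.feed(page)
    # An id-based locator survives the layout change...
    assert "submit-order" in finder.ids

# ...whereas a position-based locator such as the XPath /div/div/button
# would match v1 but break on v2 once the <section> wrapper appears.
```

This is also why the earlier question about getting testability changes from developers (e.g. adding missing id attributes) matters so much to the economics of GUI automation.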

Now: I’ve said that this is a question that we can only answer well in context – but there is another question entirely: what do we mean by “acceptance tests”, and depending on what we mean, do we want to automate them at all? You might like to check “User Acceptance Testing – A Context-Driven Perspective” for some thoughts on that (2).

Michael Bolton

Michael Bolton provides training and consulting services in software testing and is a co-author with James Bach of Rapid Software Testing, a course and methodology on how to do testing more quickly, less expensively, and with excellent results. Contact Michael at mb@developsense.com.

Notes:

  1. Hoffman, Doug: “Using Oracles in Testing and Test Automation”. LogiGear Newsletter, 2006. Available free at:
    Using Oracles in Testing and Test Automation (Part 1 of 3);
    Using Oracles in Testing and Test Automation (Part 2 of 3);
    Using Oracles in Testing and Test Automation (Part 3 of 3).
  2. Bolton, Michael: “User Acceptance Testing – A Context-Driven Perspective”. Proceedings of the Pacific Northwest Software Quality Conference 2007, page 535.
