Should Automated Acceptance Tests Use the GUI the Application Provides?

Introduction

As a consultant and trainer, I am often asked by my clients and students how to deal with automated acceptance tests. One common question is whether automated acceptance tests should use the graphical user interface (GUI) provided by the application.

Consider the Context

Like all testing questions, this is a question that can only be answered well when we consider context – the missions and the givens of the project and the product. We do that by asking more questions.

  • What questions are we trying to ask and answer with automated tests?
  • To what extent has testing been done at lower levels of the application?
  • To what extent are we trying to confirm known behavior vs. trying to discover or investigate as-yet-unrecognized problems?
  • What tools do we have available? What tools will we have to obtain?
  • What is our team’s perceived level of experience and skills with the tools?
  • Has the application been developed for testability? Does it have an easily scriptable interface below the GUI? Does it provide log files?
  • Can we get changes to the product and support from the developers when the application is insufficiently testable (e.g. when HTML elements are missing id tags)?
  • How might we value speed, precision, and volume of tests? Is the GUI the place where we will obtain optimal automation value for those considerations?
  • To what extent do we need to model real users and observe what they observe? To what extent is automation capable of making such observations? What things will a conscious, conscientious human – a skilled tester – be likely to notice that automation might miss?
  • How might we create or use oracles that will allow the tool to help us to recognize a problem? (For more on oracles, see Doug Hoffman’s three-part article “Using Oracles in Testing and Test Automation” (1).)
  • Is there a need to vary the data being used in a productive way (systematically, pseudo-randomly, or randomly)? (See the data-variation sketch after this list.)
  • How might we leverage GUI automation to drive the application quickly to a point where human testers can take over? (See the staging sketch after this list.)
  • How might automated GUI tests drive us toward confirmation bias – that is, the tendency to design and execute tests that confirm existing beliefs about the product?
  • How might automated GUI tests lead us toward automation bias – the tendency to view results from automated processes as superior to results from human processes?
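
For the questions above about oracles and about varying data, here is a minimal data-variation sketch in Python (pytest-style). The function custom_sort() is a hypothetical stand-in for a routine in the application under test; the sketch illustrates two ideas – recording the pseudo-random seed so that a failing run can be reproduced, and using an independent comparison oracle (Python’s built-in sorted()) so that the tool has some way of recognizing a wrong answer.

    # A minimal, hypothetical sketch: pseudo-random data variation with a
    # recorded seed, checked against a comparison oracle (Python's built-in
    # sorted()). custom_sort() stands in for a routine in the application.
    import random

    def custom_sort(items):
        # Hypothetical application routine; included only so the sketch runs.
        return sorted(items)

    def test_custom_sort_agrees_with_builtin_oracle():
        seed = random.randrange(2**32)
        rng = random.Random(seed)  # keep the seed so any failure is reproducible
        for _ in range(100):
            data = [rng.randint(-1000, 1000) for _ in range(rng.randint(0, 50))]
            # The oracle: an independent way of computing the expected result.
            assert custom_sort(data) == sorted(data), f"seed={seed}, data={data}"

Whether this kind of variation is productive, and whether the oracle is strong enough to reveal the problems we actually care about, are exactly the context questions in the list above.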
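
For the question above about using GUI automation to drive the application quickly to a point where human testers can take over, a staging sketch along these lines might apply. It assumes Selenium WebDriver with a Chrome driver; the URL, credentials, and element ids are invented for illustration.

    # A hypothetical staging script: GUI automation used as a setup tool,
    # not a checking tool. Assumes Selenium WebDriver and a Chrome driver;
    # the URL, credentials, and element ids are invented for illustration.
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Chrome()
    driver.get("https://example.test/login")
    driver.find_element(By.ID, "username").send_keys("tester01")
    driver.find_element(By.ID, "password").send_keys("not-a-real-password")
    driver.find_element(By.ID, "login-button").click()
    driver.get("https://example.test/orders/new?template=bulk-discount")

    # Leave the browser open at the prepared screen; a human tester takes over.
    input("Application is staged for exploration. Press Enter when you are done...")
    driver.quit()

Here the automation’s job is speed and repeatability of setup; recognizing problems remains the human tester’s job.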

This might look like a long list, but you don’t have to answer these questions in great detail, nor do you have to get each one of them exactly right. A couple of moments of reflection on each one – a couple of minutes altogether – is far better than forgetting to consider it at all. If you are uncertain or stuck, a few more minutes of investigation or exploration could provide a huge return on investment. Once you are in the middle of the project, continue to ask these questions periodically as a means of ensuring that the value of what you’re doing exceeds the cost.

Experience and Observation

In general, my experience and observation have been that as testing gets closer to the GUI, the cost of automating tests increases while the value derived from automated tests decreases.

Automated tests at the unit level tend to be:

  • Simpler to automate
  • Easier to comprehend and maintain
  • More subject to falsifiable assertions that machines can evaluate (a brief sketch follows this list)
  • Appropriately specific, where specificity matters
  • Revealing for problems that are easier to troubleshoot and debug
  • More immediately responsive to developers (since the developers tend to be the ones writing and running them)
  • Confirmatory, in a place where confirmation is more useful and important
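
As a small illustration of what a falsifiable, machine-evaluable assertion looks like at this level, here is a pytest-style sketch; invoice_total() is a hypothetical application function, included only to make the example self-contained.

    # A minimal unit-level sketch in pytest style. invoice_total() is a
    # hypothetical application function, included so the example runs.
    from decimal import Decimal

    def invoice_total(quantity, unit_price):
        # Hypothetical production code.
        return quantity * unit_price

    def test_invoice_total_is_quantity_times_unit_price():
        # One precise, falsifiable assertion that a machine can evaluate.
        assert invoice_total(3, Decimal("19.99")) == Decimal("59.97")

A check like this runs in milliseconds, needs no browser, and points directly at the code that failed.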

Automated tests at the GUI level tend to be:

  • Complex and difficult to program
  • Hard to understand and maintain
  • Inadequate for recognizing problems that humans can easily identify
  • Overly specific in places where ambiguity can be tolerated by people (see the sketch after this list)
  • Revealing for problems that are more difficult to troubleshoot and debug
  • Less immediately responsive to developers
  • Confirmatory, at a level where investigation and discovery are more important
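
To make the contrast with the unit-level sketch concrete, here is the same rule checked through the GUI. It assumes Selenium WebDriver; the URL, element ids, and the rendered “$59.97” text are invented for illustration.

    # A hypothetical GUI-level check of the same rule as the unit-level
    # sketch above. Assumes Selenium WebDriver; the URL, element ids, and
    # rendered text are invented for illustration.
    from selenium import webdriver
    from selenium.webdriver.common.by import By
    from selenium.webdriver.support.ui import WebDriverWait
    from selenium.webdriver.support import expected_conditions as EC

    driver = webdriver.Chrome()
    try:
        driver.get("https://example.test/invoices/new")
        driver.find_element(By.ID, "quantity").send_keys("3")
        driver.find_element(By.ID, "unit-price").send_keys("19.99")
        driver.find_element(By.ID, "calculate").click()
        # Timing, rendering, and locators are all now part of the test's job.
        total = WebDriverWait(driver, 10).until(
            EC.visibility_of_element_located((By.ID, "total"))
        ).text
        # Brittle: breaks if formatting, locale, or page structure changes.
        assert total == "$59.97"
    finally:
        driver.quit()

The check asks the same question as the unit-level one, but now depends on browser startup, page structure, element ids, timing, and text formatting – each a source of cost and noise rather than of new information.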

Now: I’ve said that this is a question that we can only answer well in context – but there is another question entirely: what do we mean by “acceptance tests”, and depending on what we mean, do we want to automate them at all? You might like to check “User Acceptance Testing – A Context-Driven Perspective” for some thoughts on that (2).

Michael Bolton

Michael Bolton provides training and consulting services in software testing and is a co-author with James Bach of Rapid Software Testing, a course and methodology on how to do testing more quickly, less expensively, and with excellent results. Contact Michael at mb@developsense.com.

Notes:

  1. Hoffman, Doug: “Using Oracles in Testing and Test Automation”. LogiGear Newsletter, 2006. Available free at:
    Using Oracles in Testing and Test Automation (Part 1 of 3);
    Using Oracles in Testing and Test Automation (Part 2 of 3);
    Using Oracles in Testing and Test Automation (Part 3 of 3).
  2. Bolton, Michael: “User Acceptance Testing – A Context-Driven Perspective”. Proceedings of the Pacific Northwest Software Quality Conference 2007, page 535.
