Should Automated Acceptance Tests Use the GUI the Application Provides?

Introduction

As a consultant and trainer, I am often asked by my clients and students how to deal with automated acceptance tests. One common question is whether automated acceptance tests should use the graphical user interface (GUI) provided by the application.

Consider the Context

Like all testing questions, this is a question that can only be answered well when we consider context – the missions and the givens of the project and the product. We do that by asking more questions.

  • What questions are we trying to ask and answer with automated tests?
  • To what extent has testing been done at lower levels of the application?
  • To what extent are we trying to confirm known behavior vs. trying to discover or investigate as-yet-unrecognized problems?
  • What tools do we have available? What tools will we have to obtain?
  • What is our team’s perceived level of experience and skills with the tools?
  • Has the application been developed for testability? Does it have an easily scriptable interface below the GUI? Does it provide log files?
  • Can we get changes to the product and support from the developers when the application is insufficiently testable (e.g. when HTML elements are missing id tags)?
  • How might we value speed, precision, and volume of tests? Is the GUI the place where we will obtain optimal automation value for those considerations?
  • To what extent do we need to model real users and observe what they observe? To what extent is automation capable of making such observations? What things will a conscious, conscientious human – a skilled tester – be likely to notice that automation might miss?
  • How might we create or use oracles that will allow the tool to help us to recognize a problem? (For more on oracles, see Doug Hoffman’s three-part article “Using Oracles in Testing and Test Automation” (1).)
  • Is there a need to vary the data being used in a productive way (systematically, pseudo-randomly, or randomly)? (See the sketch after this list.)
  • How might we leverage GUI automation to drive the application quickly to a point where human testers can take over?
  • How might automated GUI tests drive us toward confirmation bias – that is, the tendency to design and execute tests that confirm existing beliefs about the product?
  • How might automated GUI tests lead us towards automation bias – the tendency to view results from automated processes as superior to those from human processes?
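
Several of these questions – an easily scriptable interface below the GUI, productive variation of test data, and oracles that a tool can evaluate – can be made concrete. Here is a minimal sketch in Python; the HTTP endpoint, field names, and expected responses are hypothetical, invented only to illustrate the idea rather than drawn from any particular product.

    # A sketch of checking below the GUI via a hypothetical HTTP API,
    # with pseudo-random (but reproducible) variation of the test data.
    import random
    import requests

    BASE_URL = "http://localhost:8080"  # assumed local test instance

    def random_order(rng):
        """Vary the order data pseudo-randomly but reproducibly."""
        return {
            "sku": rng.choice(["A-100", "B-200", "C-300"]),
            "quantity": rng.randint(1, 50),
            "expedite": rng.random() < 0.2,
        }

    def main():
        rng = random.Random(42)  # fixed seed, so any failure can be replayed
        for _ in range(25):
            order = random_order(rng)
            response = requests.post(f"{BASE_URL}/api/orders", json=order, timeout=5)
            # Deliberately simple oracles: the API accepted the order
            # and returned a non-empty order id.
            assert response.status_code == 201, f"unexpected status {response.status_code} for {order}"
            assert response.json().get("id"), f"no order id returned for {order}"

    if __name__ == "__main__":
        main()

A few dozen lines like this run in seconds, need no browser, and make it straightforward to vary the data; they also tell us nothing about what a user would actually see, which is why the rest of the questions above still matter.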

This might look like a long list, but you don’t have to answer these questions in great detail, nor do you have to get each one of them exactly right. A couple of moments of reflection on each one – a couple of minutes altogether – is likely to serve you far better than forgetting to consider it at all. If you are uncertain or stuck, a few more minutes of investigation or exploration could provide a huge return on investment. While you are in the middle of the project, continue to ask these questions periodically as a means of ensuring that the value of what you’re doing exceeds the cost.

Experience and Observation

In general, my experience and observation have been that as testing gets closer to the GUI, the cost of automating tests increases while the value derived from those tests decreases.

Automated tests at the unit level tend to be:

  • Simpler to automate
  • Easier to comprehend and maintain
  • More subject to falsifiable assertions that machines can evaluate (see the sketch after this list)
  • Appropriately specific, where specificity matters
  • Revealing for problems that are easier to troubleshoot and debug
  • More immediately responsive to developers (since the developers tend to be the ones writing and running them)
  • Confirmatory, in a place where confirmation is more useful and important
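
To illustrate what “falsifiable assertions that machines can evaluate” might look like at the unit level, here is a minimal sketch using pytest; the shipping_cost function and its pricing rules are hypothetical, invented only for the example.

    # A hypothetical pricing rule, invented for illustration only.
    import pytest

    def shipping_cost(weight_kg: float, expedite: bool) -> float:
        if weight_kg <= 0:
            raise ValueError("weight must be positive")
        base = 5.00 + 1.25 * weight_kg
        return round(base * 2 if expedite else base, 2)

    # Each assertion is falsifiable and machine-evaluable, and a failure
    # points directly at the rule that was violated.
    def test_standard_shipping_uses_the_base_rate():
        assert shipping_cost(4.0, expedite=False) == 10.00

    def test_expedited_shipping_doubles_the_base_rate():
        assert shipping_cost(4.0, expedite=True) == 20.00

    def test_non_positive_weight_is_rejected():
        with pytest.raises(ValueError):
            shipping_cost(0.0, expedite=False)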

Automated tests at the GUI level tend to be:

  • Complex and difficult to program (see the sketch after this list)
  • Hard to understand and maintain
  • Inadequate for recognizing problems that humans can easily identify
  • Overly specific in places where ambiguity can be tolerated by people
  • Revealing for problems that are more difficult to troubleshoot and debug
  • Less immediately responsive to developers
  • Confirmatory, at a level where investigation and discovery are more important
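
For contrast, here is a sketch of a comparable check driven through the GUI with Selenium WebDriver. The URL, element ids, and page flow are assumptions made for the example, and the sketch presumes that the developers have provided stable id attributes; without them, the locators become considerably more fragile.

    # A sketch of a similar check driven through the GUI; the URL,
    # element ids, and flow are hypothetical.
    from selenium import webdriver
    from selenium.webdriver.common.by import By
    from selenium.webdriver.support.ui import WebDriverWait
    from selenium.webdriver.support import expected_conditions as EC

    driver = webdriver.Chrome()
    try:
        driver.get("http://localhost:8080/orders/new")  # assumed test instance

        # Every step depends on locators and on the page being ready;
        # explicit waits guard against timing-related false alarms.
        wait = WebDriverWait(driver, timeout=10)
        wait.until(EC.presence_of_element_located((By.ID, "weight"))).send_keys("4.0")
        driver.find_element(By.ID, "expedite").click()
        driver.find_element(By.ID, "calculate").click()

        total = wait.until(
            EC.visibility_of_element_located((By.ID, "shipping-total"))
        ).text

        # The oracle is a string scraped from the page, which couples the
        # check to presentation details as well as to the business rule.
        assert total == "$20.00", f"unexpected shipping total: {total}"
    finally:
        driver.quit()

Even in this simplified form, the GUI version needs a browser, a deployed build, synchronization, and cooperative markup before it can evaluate a single assertion – which is where much of the extra cost and fragility comes from.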

Now: I’ve said that this is a question that we can only answer well in context – but there is another question entirely: what do we mean by “acceptance tests”, and depending on what we mean, do we want to automate them at all? You might like to check “User Acceptance Testing – A Context-Driven Perspective” for some thoughts on that (2).

Michael Bolton

Michael Bolton provides training and consulting services in software testing and is a co-author with James Bach of Rapid Software Testing, a course and methodology on how to do testing more quickly, less expensively, and with excellent results. Contact Michael at mb@developsense.com.

Notes:

  1. Hoffman, Doug: “Using Oracles in Testing and Test Automation”. LogiGear Newsletter, 2006. Available free at:
    Using Oracles in Testing and Test Automation (Part 1 of 3);
    Using Oracles in Testing and Test Automation (Part 2 of 3);
    Using Oracles in Testing and Test Automation (Part 3 of 3).
  2. Bolton, Michael: “User Acceptance Testing – A Context-Driven Perspective”. Proceedings of the Pacific Northwest Software Quality Conference 2007, page 535.