Pushing the Boundaries of Test Automation: an Overview of How to Automate the UX with Heuristics

One of my current responsibilities is to find ways to automate, as much as is practical, the ‘testing’ of the user experience (UX) for complex web-based applications. In my view, full test automation of UX is impractical and probably unwise; however, we can use automation to find potential UX problems, or undesirable effects, even in rich, complex applications. Others and I are working to find ways to use automation to discover these various types of potential problems. Here’s an overview of some of the points I have made; I intend to extend and expand on this work in future posts.

In my experience, heuristic techniques are useful for identifying potential issues. Various people have managed to create test automation that essentially automates different types of heuristics.
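
To make this concrete, here is a minimal sketch of what an automated heuristic can look like. It is purely illustrative (my own example, not taken from any of the tools below): it encodes the well-known usability heuristic that link text such as “click here” tells the user nothing about the destination.

```python
from html.parser import HTMLParser

# Usability heuristic: link text such as "click here" or "read more"
# gives the user no information about where the link leads.
VAGUE_LINK_TEXT = {"click here", "here", "read more", "more"}

class VagueLinkChecker(HTMLParser):
    """Flags <a> elements whose visible text is uninformative."""

    def __init__(self):
        super().__init__()
        self.in_link = False
        self.link_text = []
        self.findings = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.in_link = True
            self.link_text = []

    def handle_data(self, data):
        if self.in_link:
            self.link_text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self.in_link:
            text = "".join(self.link_text).strip().lower()
            if text in VAGUE_LINK_TEXT:
                self.findings.append(text)
            self.in_link = False

def find_vague_links(html):
    """Return the vague link texts found in an HTML fragment."""
    checker = VagueLinkChecker()
    checker.feed(html)
    return checker.findings
```

For example, `find_vague_links('<a href="/x">Click here</a> <a href="/y">Pricing</a>')` returns `['click here']`. A check this small is trivially runnable against every page in a crawl, which is what makes heuristics attractive candidates for automation.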

EXAMPLES OF PUSHING THE BOUNDARIES

  • Dynamic Usability / Accessibility Testing—See the following article I wrote that describes some of my work in this area. The code is available here; you’re welcome to use and experiment with it.
  • Fighting Layout Bugs—This is by Michael Tamm. He described the work in a public “tech talk” at Google’s Test Automation Conference (GTAC) in 2009. The link is available on his project’s homepage.
  • Crawljax—Another open-source project: it crawls rich, dynamic web applications by automatically firing events (actions) on page elements, using configurable patterns to decide which actions to apply and where. I’ve seen it used for significant, global web applications; there is a video online which describes some of that work.
  • BiDi Checker—This software helps identify problems related to bi-directional content on websites and web applications. It identifies a wide range of potential issues.
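
In the same spirit as the dynamic usability/accessibility work above, here is a small, hypothetical accessibility heuristic (a sketch of my own, not the code from any of the projects listed): flag `<img>` elements that lack an `alt` attribute, since screen readers cannot describe them.

```python
from html.parser import HTMLParser

class MissingAltChecker(HTMLParser):
    """Accessibility heuristic: every <img> should carry an alt attribute
    so assistive technology can describe it. An empty alt ("") is valid
    for purely decorative images, so only a missing attribute is flagged."""

    def __init__(self):
        super().__init__()
        self.findings = []

    def handle_starttag(self, tag, attrs):
        attributes = dict(attrs)
        if tag == "img" and "alt" not in attributes:
            # Record the src so a human can locate the offending image.
            self.findings.append(attributes.get("src", "(no src)"))

def images_missing_alt(html):
    """Return the src of each <img> that has no alt attribute."""
    checker = MissingAltChecker()
    checker.feed(html)
    return checker.findings
```

For instance, `images_missing_alt('<img src="logo.png"><img src="x.png" alt="X">')` returns `['logo.png']`. The real tools go much further, of course, but the underlying pattern is the same: encode a rule, sweep it across pages, and report candidate problems for a human to judge.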

You might notice that all the examples I’ve provided are available as free open-source software (FOSS). I’ve learnt to value open source because it reduces the cost of experimentation and allows us to extend and modify the code, e.g. to add new heuristics relatively easily. (You still need to be able to write code; however, the code is freely and immediately available.)
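
To show how cheaply new heuristics can be added once you can modify the code, one simple design (a sketch of my own, not the actual architecture of any of these projects) treats each heuristic as a plain function from page HTML to a list of findings, gathered in a registry:

```python
# A sketch of a pluggable heuristics design: each heuristic is a plain
# function from page HTML to a list of finding strings. Adding a new
# heuristic means registering one more function.

HEURISTICS = []

def heuristic(fn):
    """Decorator that registers a heuristic check."""
    HEURISTICS.append(fn)
    return fn

@heuristic
def missing_page_title(html):
    # Every page should have a <title>; it anchors navigation and search.
    if "<title>" not in html.lower():
        return ["page has no <title> element"]
    return []

@heuristic
def inline_event_handlers(html):
    # Inline handlers such as onclick= often signal behaviour that
    # keyboard-only users cannot reach.
    if "onclick=" in html.lower():
        return ["inline onclick handler found"]
    return []

def run_heuristics(html):
    """Run every registered heuristic and collect all findings."""
    findings = []
    for check in HEURISTICS:
        findings.extend(check(html))
    return findings
```

Registering another heuristic is then a one-function change, which is exactly the kind of low-cost experimentation that open source makes practical.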

AUTOMATION IS (OFTEN) NECESSARY, BUT NOT SUFFICIENT

Automation and automated tests can be beguiling, and paradoxically they can increase the chances of missing critical problems if we choose to rely mainly, or even solely, on the automated tests. Even with state-of-the-art automated tests (the best we can do across the industry), I still believe we need to ask additional questions about the software being tested. Sadly, in my experience, most automated tests are poorly designed and implemented, which increases the likelihood of problems eluding them.

Here are two articles which describe some key concerns:

The first describes how people can be biased into over-reliance on automation. It is called “Beware of Automation Bias,” by M.L. Cummings, 2004. The article is available online.

The second helped me understand that testing helps us work out which questions to ask of the software, and that we need a process to identify the relevant questions. The article is called “The Five Orders of Ignorance,” by Phillip G. Armour, Communications of the ACM (CACM), 2000.


Julian Harty

Julian has been working in technology since 1980 and over the years has held an eclectic collection of roles and responsibilities: he was the first software test engineer at Google in Europe, the Tester at Large for the eBay group, and has consulted for and helped many companies and projects globally. He has also been a company director for a mix of companies and startups. Currently, Julian combines commercial work, part-time Ph.D. studies, and helping to improve education, teaching, and learning using low-cost mobile devices, particularly for disadvantaged schools globally. He has authored several books, most recently the Mobile Analytics Playbook, which can be downloaded for free at: http://www.themobileanalyticsplaybook.com/. You can find much of his work, including open-source projects, online.
