Is Action Based Testing an Automation Technique?

Introduction

Keyword-driven methodologies like Action Based Testing (ABT) are usually considered to be an Automation technique. They are commonly positioned as an advanced and practical alternative to techniques like “record & playback” or “scripting”.

To see ABT as an Automation technique is not incorrect; we do so ourselves in LogiGear's marketing. Some parts of ABT, and of the supporting toolset TestArchitect, are very technical and would not fit in a manual process. However, in this article I want to show that the core ideas of ABT are not specifically about automation at all; they are merely a style of test design.

ABT as a Style for Writing a Test

Consider, for example, this small manual test instruction regarding a registration dialog (see Example 1). I found this instruction in one of our earlier projects:

Enter a user id that is greater than 10 characters in the user id field. Then enter proper information for all the other fields on the screen and click the ‘Continue’ button.

The following error message should be displayed below the screen: ‘A User Id must be less than 10 characters’.

Example 1 – Manual Test Instruction

At first glance, there does not appear to be a lot wrong with this instruction. It is a proper and very common instruction in a manual test suite. However, when examined more closely a few things can be noted:

  1. First, the input values mentioned in the instruction are “implicit”; they are circumscribed rather than explicitly specified. The text calls for “a user id that is greater than 10 characters” and “proper information”. This is not a big deal in most cases, but it is not always efficient. It calls for the tester to supply the actual values when the test case is executed, meaning:
    • The tester has to spend time and effort every time the test case is executed, while if the values had been specified as part of the test design this would have happened only once.
    • This test case execution effort happens typically near the end of a project, when not much time is left for the tester to be expending “extra” effort.
  2. Furthermore, an instruction like “click the ‘Continue’ button” is likely to be repeated for every test case related to this dialog. In this example that is not much of a problem, but in many test case descriptions I have seen, instructions can be very detailed and extensive, and are repeated over and over again in each test case. This means:
    • A lot of work for the test designer to create these instructions and make sure they are correct.
    • The instructions are quickly rendered outdated if anything changes in the system under test. This means that the instructions either have to be modified or, in many cases, are simply left alone and remain incorrect. After a number of maintenance cycles the test cases become obsolete, and a valuable investment is lost.

In general, manual instructions tend to be verbose, voluminous, and hard to create and maintain. Executing manual instructions costs a lot of time and labor, often when a project is already under time pressure.

Using ABT, the same test would be written in a spreadsheet, and would typically look like this:

action              user id      message
check registration  aaabbbcccdd  User Id must be less than 10 characters

Example 2 – ABT Test Instruction
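To make the mechanics concrete, here is a minimal sketch, in Python, of how a keyword-driven interpreter could execute such a test line. This is a hypothetical illustration, not TestArchitect's actual implementation: the action name `check registration`, the simulated dialog behavior, and the default field values are all made up for the example.

```python
# Hypothetical keyword-driven interpreter: each test line is a row
# consisting of an action name followed by its argument values.

def check_registration(user_id, message):
    """Hypothetical action: fill in the registration dialog and verify
    that the expected error message appears. The dialog is simulated."""
    # Fields the test does not care about get default values inside
    # the action, not in the test line.
    defaults = {"name": "John Doe", "email": "jdoe@example.com"}
    # Simulated system under test: reject user ids of 10 or more characters.
    actual = ("User Id must be less than 10 characters"
              if len(user_id) >= 10 else "")
    assert actual == message, f"expected {message!r}, got {actual!r}"

# The interpreter maps action names to their implementations.
ACTIONS = {"check registration": check_registration}

def run(test_lines):
    for action, *args in test_lines:
        ACTIONS[action](*args)

run([("check registration", "aaabbbcccdd",
      "User Id must be less than 10 characters")])
print("all test lines passed")
```

The point of the sketch is the separation of concerns: the test line carries only the values that matter to the test, while the action implementation owns the navigation details and the defaults.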

Compared to the earlier version (Example 1) a number of differences can be seen:

  • The format of the instruction is much shorter, taking less time for the test designer to create.
  • The input value for user id is now specified explicitly. The tester no longer needs to come up with one.
  • Input values not relevant to the test (the “other fields”) are left to the action, which uses default values for them. This keeps the focus on what matters for the test, which increases readability and improves maintainability.
  • Other details, like the need to click on the “continue” button or where to find the error message, are also hidden in the action and equally guarded against changes in the system under test.

Notice that the action-based format used here does not assume Automation at all. It is just another way of writing a test. The test case can be executed just as well manually as with Automation: an experienced tester, familiar with operating the system under test, can follow this instruction and execute the test case by hand.

However, two considerations remain when using the short action-based format for manual tests:

  1. The manual tester executing the test must be experienced, in particular with the system under test. This is not always a valid assumption.
  2. Since they are not specified, the manual tester will still have to supply values for the other fields in the dialog during test execution.

The way around these limitations is to let the test designer spend time defining the “check registration” action in more detail, just as would be done when automating this test. And just like with Automation, this specification, which we call an “action definition”, only has to be created once, which also improves maintainability. For a manual test this definition is specified as template text, with placeholders for the arguments for which the manual tester will need to supply the actual values. In TestArchitect we have even added a function that does this automatically: it generates a document in which all actions are replaced with the template texts specified in their definitions, and the placeholders are replaced by the actual argument values.
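The idea of expanding an action definition into manual instructions can be sketched as follows. This is a simplified illustration under assumed names; the template text and the `manual_instruction` helper are inventions for this example, not TestArchitect's actual feature.

```python
# Sketch: generate a manual test instruction from an action definition.
# The definition is template text with placeholders for the arguments.

TEMPLATES = {
    "check registration":
        "Enter '{user id}' in the user id field, fill the other fields "
        "with valid default values, and click the 'Continue' button. "
        "Verify that the error message '{message}' is displayed.",
}

def manual_instruction(action, arguments):
    """Replace each placeholder in the action's template text with the
    actual argument value from the test line."""
    text = TEMPLATES[action]
    for name, value in arguments.items():
        text = text.replace("{" + name + "}", value)
    return text

print(manual_instruction(
    "check registration",
    {"user id": "aaabbbcccdd",
     "message": "User Id must be less than 10 characters"}))
```

As with automation, the template is written once per action, so a change in the dialog requires updating one definition rather than every test case that uses it.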

Conclusion

What I hope I have illustrated in this article is that Action Based Testing, and keyword-based methods in general, are not necessarily Automation techniques. They are above all an economical way of describing tests: a style, or format, for writing a test instruction.

Hans Buwalda

Hans leads LogiGear’s research and development of test automation solutions, and the delivery of advanced test automation consulting and engineering services. He is a pioneer of the keyword approach for software testing organizations, and he assists clients in strategic implementation of the Action Based Testing™ method throughout their testing organizations.

Hans is also the original architect of LogiGear’s TestArchitect™, the modular keyword-driven toolset for software test design, automation and management. Hans is an internationally recognized expert on test automation, test development and testing technology management. He is coauthor of Integrated Test Design and Automation (Addison Wesley, 2001), and speaks frequently at international testing conferences.

Hans holds a Master of Science in Computer Science from Free University, Amsterdam.


