Is Action Based Testing an Automation Technique?

Introduction

Keyword-driven methodologies like Action Based Testing (ABT) are usually considered to be an Automation technique. They are commonly positioned as an advanced and practical alternative to other techniques such as “record & playback” or “scripting”.

To see ABT as an Automation technique is not incorrect. We position it that way ourselves in LogiGear’s marketing. Some parts of ABT, and the supporting toolset TestArchitect, are very technical and would not fit in a manual process. However, in this article I want to show that the core ideas of ABT are not specifically about automation at all; they are merely a style of test design.

ABT as a Style for Writing a Test

Consider, for example, this small manual test instruction regarding a registration dialog (see Example 1). I found this instruction in one of our earlier projects:

Enter a user id that is greater than 10 characters in the user id field. Then enter proper information for all the other fields on the screen and click the ‘Continue’ button.

The following error message should be displayed below the screen: ‘A User Id must be less than 10 characters’.

Example 1 – Manual Test Instruction

At first glance, there does not appear to be a lot wrong with this instruction. It is a proper and very common instruction in a manual test suite. However, when it is examined more closely, a few things stand out:

  1. First, the input values mentioned in the instruction are “implicit”; they are circumscribed rather than explicitly specified. The text calls for “a user id that is greater than 10 characters” and “proper information”. This is not a big deal in most cases, but it is not always efficient. It calls for the tester to supply the actual values when the test case is executed, meaning:
    • The tester has to spend time and effort every time the test case is executed, whereas if the values had been specified as part of the test design, that effort would have been spent only once.
    • This test case execution effort happens typically near the end of a project, when not much time is left for the tester to be expending “extra” effort.
  2. Furthermore, an instruction like “click the ‘Continue’ button” is likely to be repeated for every test case related to this dialog. In this example that is not much of a problem, but in many test case descriptions I have seen, the instructions are very detailed and extensive, and are repeated over and over again in each test case. This means:
    • A lot of work for the test designer to create these instructions and make sure they are correct.
    • The instructions quickly become outdated when anything changes in the system under test. They then either have to be modified or, in many cases, are simply left alone, and thus incorrect. After a number of maintenance cycles the test cases become obsolete, losing a valuable investment.

In general, manual instructions tend to be verbose, voluminous, and hard to create and maintain. Executing manual instructions costs a lot of time and labor, often when a project is already under time pressure.

Using ABT, the same test would be written in a spreadsheet and would typically look like this:

action               user id       message
check registration   aaabbbcccdd   A User Id must be less than 10 characters

Example 2 – ABT Test Instruction

Compared to the earlier version (Example 1) a number of differences can be seen:

  • The format of the instruction is much shorter, taking less time for the test designer to create.
  • The input value for user id is now specified explicitly. The tester no longer needs to come up with one.
  • Input values not relevant to the test (the “other fields”) are left to the action, which supplies default values. The test focuses on what matters, which increases readability and improves maintainability.
  • Other details, like the need to click the “Continue” button or where to find the error message, are also hidden inside the action, and are thus equally shielded from changes in the system under test.

Notice that the action-based format used here does not assume Automation at all. It is just another way of writing a test. The test case can be executed just as well manually as with Automation. An experienced tester who is familiar with operating the system under test can follow this instruction and execute the test case manually.
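When the test line is automated, the action keyword simply maps to a definition in code. Example 3 gives a minimal sketch, in Python, of how such a dispatch could work for the line in Example 2. It is illustrative only, not TestArchitect’s implementation: the dialog driver object with its enter, click and read_error_message methods is hypothetical, as are the default field values.

# Hypothetical default values standing in for the "proper information"
# that Example 1 asks the tester to enter in the other fields.
DEFAULT_VALUES = {"first name": "John", "last name": "Doe"}

def check_registration(dialog, user_id, message):
    # 'dialog' is an assumed driver object for the registration dialog;
    # enter(), click() and read_error_message() are illustrative names.
    dialog.enter("user id", user_id)
    for field, value in DEFAULT_VALUES.items():
        dialog.enter(field, value)  # fields not under test get defaults
    dialog.click("Continue")
    actual = dialog.read_error_message()
    assert actual == message, f"expected {message!r}, got {actual!r}"

# A dispatch table maps action keywords to their definitions,
# so executing a test line is a single lookup and call.
ACTIONS = {"check registration": check_registration}

def run_line(dialog, action, *arguments):
    ACTIONS[action](dialog, *arguments)

Example 3 – Sketch of an Automated Action Definition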

However, two considerations remain when the short action-based format is used for manual tests:

  1. The manual tester executing the test must be experienced, and in particular familiar with the system under test. This is not always a valid assumption.
  2. Since values for the other fields in the dialog are not specified, the manual tester will still have to supply them during test execution.

The way around these limitations is to let the test designer spend time defining the “check registration” action in more detail, just as would be done when automating this test. And just as with Automation, this specification, which we call an “action definition”, only has to be created once, which also improves maintainability. For a manual test, the definition is specified as template text, with placeholders for the arguments for which the manual tester needs the actual values. In TestArchitect we have even added a function that does this automatically: it generates a document in which all actions are replaced by the template texts specified in their definitions, with the placeholders replaced by the actual argument values.
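Example 4 gives a minimal sketch of that generation step in Python. The template text and the placeholder syntax are assumptions for illustration; TestArchitect’s actual template format may differ.

# Template text for the action definition; the {user_id} and {message}
# placeholder syntax is an illustrative assumption.
TEMPLATES = {
    "check registration": (
        "Enter '{user_id}' in the user id field, enter proper information "
        "for all other fields, and click the 'Continue' button. Check that "
        "the following error message is displayed below the screen: "
        "'{message}'."
    ),
}

def manual_instruction(action, **arguments):
    # Replace each placeholder with the actual argument value from the test line.
    return TEMPLATES[action].format(**arguments)

# Expanding the test line from Example 2 yields an instruction much like Example 1:
print(manual_instruction(
    "check registration",
    user_id="aaabbbcccdd",
    message="A User Id must be less than 10 characters",
))

Example 4 – Sketch of Template Expansion for Manual Execution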

Conclusion

What I hope to have illustrated in this article is that Action Based Testing, and keyword-based methods in general, are not necessarily Automation techniques. They are above all an economical way of describing tests: a style, or a format, for writing a test instruction.

Hans Buwalda

Hans leads LogiGear’s research and development of test automation solutions, and the delivery of advanced test automation consulting and engineering services. He is a pioneer of the keyword approach for software testing organizations, and he assists clients in strategic implementation of the Action Based Testing™ method throughout their testing organizations.

Hans is also the original architect of LogiGear’s TestArchitect™, the modular keyword-driven toolset for software test design, automation and management. Hans is an internationally recognized expert on test automation, test development and testing technology management. He is coauthor of Integrated Test Design and Automation (Addison Wesley, 2001), and speaks frequently at international testing conferences.

Hans holds a Master of Science in Computer Science from Free University, Amsterdam.


