Is Test Automation the Same as Programming Tests?

Introduction

A common issue that I come across in projects is the relationship between test automation and programming. In this article I want to highlight some of the differences that I feel exist between the two.

After many decades of often slow and hard-fought progress, software engineering is now a well-established profession, with recognized methods and practices. Automated testing, however, is still a relatively new and often misunderstood phenomenon. Software engineers often get involved in software testing and try to apply their programming knowledge and experience. For many aspects of software testing this is a good thing, but over the years I have come to believe that automated testing has its own distinct properties and challenges, and that a practitioner should try to understand them and work with them.

This article will explore the differences between software engineering and designing automated tests.

The Role and Importance of Test Design

A first big item is the role and importance of test design. The “logic” of an automated test should be in the test, not in the software automating it. This notion is the core of LogiGear’s Action Based Testing (ABT) method, where test cases are created by testers in a format that is as friendly to them as possible, using action words in spreadsheets. In an earlier article I discussed whether ABT is an automation technique (see: Is Action Based Testing an Automation Technique?). In my view that is not the case. I view it primarily as a test design technique. This illustrates that the focus in automated testing is placed on the test design, not the automation technology.
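
To make concrete where the "logic" lives, here is a minimal Python sketch of the action-word idea. It is not TestArchitect or the actual ABT tooling; the action names, test rows, and functions are hypothetical. The test is just a list of rows a tester could write in a spreadsheet, and the automation layer contributes nothing more than one small function per action word:

    # Minimal sketch of the action-word idea (hypothetical actions, not TestArchitect).
    # The test itself is plain data, as a tester might write it in a spreadsheet.
    TEST_CASE = [
        ["enter order",       "ORD-001", "3 widgets"],
        ["check order total", "ORD-001", "29.85"],
    ]

    # Automation layer: one small function per action word (illustrative stubs).
    def enter_order(order_id, items):
        print(f"entering {items} for {order_id}")      # would drive the system under test

    def check_order_total(order_id, expected):
        actual = "29.85"                               # would be read from the system under test
        assert actual == expected, f"{order_id}: expected {expected}, got {actual}"

    ACTIONS = {
        "enter order": enter_order,
        "check order total": check_order_total,
    }

    def run(test_case):
        for action, *arguments in test_case:
            ACTIONS[action](*arguments)

    if __name__ == "__main__":
        run(TEST_CASE)

All of the test logic sits in the rows; the functions only translate action words into operations on the system under test.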

The Role of Test Cases

A second aspect is the role of test cases, and how they relate to each other. In a regular software system, the many functions to which test cases could be compared all work together and are interdependent; changes in one part can have consequences for another. In test cases, such relationships should be avoided. A test case should have a specific scope that is well-differentiated from all other test cases. In an ABT test it is common to let one test case leave behind a well-defined situation for the next one, but that is as far as relationships go. Test cases can best be seen as "satellites" hanging around the IT system, each probing a specific aspect of it, independent from each other. The consequence is that test design effort focuses on the functional aspects of the test cases needed, and should preferably stay away from the technical architecture of the automation. Test automation is a separate effort, focused on the actions, not the test cases.
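
A small sketch of the "satellite" idea, with an in-memory stand-in for the system under test and made-up test cases. Each test case has its own narrow scope, and the only thing connecting them is the well-defined situation the first one leaves behind:

    # Sketch of test case independence (illustrative names, in-memory stand-in).
    customers = {}     # stands in for the system under test
    orders = {}

    def test_create_customer():
        # Scope: customer creation only. Leaves behind one known customer.
        customers["CUST-100"] = "Alice Example"
        assert "CUST-100" in customers

    def test_place_order():
        # Scope: ordering only. Relies solely on the documented end state of the
        # previous test (customer CUST-100 exists), not on how that state was reached.
        orders.setdefault("CUST-100", []).append("3 widgets")
        assert len(orders["CUST-100"]) == 1

    if __name__ == "__main__":
        test_create_customer()
        test_place_order()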

Test Case Maintainability

Maybe one of the most obvious properties of automated tests is their maintainability, which relates hardly at all to their technical structure and almost entirely to their sensitivity to changes in the system under test. A well-designed automated test suite can still be regarded as weak if small changes to the system under test have a large impact on the tests. This property is probably the single most important aspect that gives test automation its own unique dynamic.

Test Case Readability

A further major criterion for me is readability. When looking at test cases I want to be able to understand them quickly and assess their effectiveness and completeness. Readability of test cases in itself also helps their maintainability. For me this includes, for example, explicit values, both for inputs and for expected outcomes. In ABT these show up as the arguments of the actions in the spreadsheet. In programming, however, it is common practice not to "hard code" values, and rightfully so, since doing so would jeopardize maintainability. In ABT we also have the possibility of using variables instead of hard values, but I encourage test designers to use them as little as possible. Only when a value is variable and is reused in multiple test cases should a variable be used. Examples are the IP address of a server to contact as part of a test, or a sales tax percentage in an order management system. For people with a software engineering background this is quite a hard notion to overcome, and it often takes quite a bit of persuasion to get them to use explicit values.
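
To illustrate the guideline, here is a hypothetical fragment of an order test in Python (all values and action names are invented). The server IP address, reused by many test cases, is the kind of value that earns a variable; every other input and expected outcome is stated literally so the reader can judge the test at a glance:

    # Sketch of explicit values versus variables (all values and actions are made up).
    SERVER_IP = "10.0.0.15"      # reused by many test cases, so a variable is justified

    TEST_CASE = [
        ["connect to server",    SERVER_IP],
        ["enter order line",     "3", "widget", "9.95"],
        ["check order total",    "29.85"],     # 3 x 9.95, stated literally for the reader
        ["check total with tax", "32.31"],     # expected outcome spelled out, not computed
    ]

    if __name__ == "__main__":
        for row in TEST_CASE:
            print(*row)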

Designing Test Cases is Not a Programming Challenge

There are more examples where one could consider a software background "harmful" for a designer of automated tests. A very noticeable one is when an engineer tackles a test case as a programming challenge and comes up with a complex, contrived solution that might be smart, but does not help readability and obfuscates the intention and logic of the test case. A recent case I saw in a project was the use of a data table to test a number of links in a web page. The test case looped through the links listed in the table to check for the expected link captions. The result may have qualified as sophisticated programming, but it was hard to understand what was going on. A much easier solution, in ABT, is to define an action "check link caption" and apply it to each link to be checked.
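
The contrast can be sketched as follows in Python (the page, links, and captions are invented for the example). The first version is the data table plus a loop; the second is the ABT-style test, where one explicit "check link caption" row per link makes the intention immediately visible:

    # Contrasting the two approaches (page, links, and captions are made up).
    class FakePage:
        """Stand-in for the web page under test."""
        _captions = {"home": "Home", "contact": "Contact Us", "faq": "FAQ"}
        def caption(self, link_id):
            return self._captions[link_id]

    # 1. The "clever" version: a data table plus a loop. Compact, but the reader
    #    has to run the loop in their head to see what is actually verified.
    LINKS = {"home": "Home", "contact": "Contact Us", "faq": "FAQ"}

    def test_links_with_loop(page):
        for link_id, expected in LINKS.items():
            assert page.caption(link_id) == expected

    # 2. The ABT-style version: one explicit "check link caption" row per link.
    TEST_CASE = [
        ["check link caption", "home",    "Home"],
        ["check link caption", "contact", "Contact Us"],
        ["check link caption", "faq",     "FAQ"],
    ]

    def check_link_caption(page, link_id, expected):
        assert page.caption(link_id) == expected

    if __name__ == "__main__":
        page = FakePage()
        test_links_with_loop(page)
        for _action, link_id, expected in TEST_CASE:
            check_link_caption(page, link_id, expected)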

There Should be Little or No Debugging of Tests

Last but not least is debugging. When I hear from a project that automated tests are being debugged, it immediately raises the question for me of how well the test design was done. My criterion is simple: test values can be off, but the automation itself should always work. If it doesn't, lower-level tests should have been created and run first to make sure all is well with the navigation of the system under test. Also, all the actions should be verified apart from the test cases: the automation engineer who is responsible for the actions should make his or her own test cases to test the actions before "releasing" them for use by the testers. The result should be that once a higher-level, more functionally oriented test is run, it works without problems. In our product we have now released a "debugger" to debug tests, but in fact I encourage everybody not to use it, and instead to turn their eye to the test design the moment a test turns out to be hard to run.
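
As an illustration of verifying an action before it is released to the testers, here is a hypothetical unit-style check an automation engineer might write for a "check link caption" action, exercised against a controlled stub rather than inside a functional test case:

    # Sketch of verifying an action on its own (the action and stub are hypothetical).
    def check_link_caption(page, link_id, expected):
        actual = page.caption(link_id)
        assert actual == expected, f"{link_id}: expected '{expected}', got '{actual}'"

    class StubPage:
        """Controlled stand-in so the action can be exercised in isolation."""
        def caption(self, link_id):
            return {"home": "Home"}.get(link_id, "")

    def test_check_link_caption_action():
        check_link_caption(StubPage(), "home", "Home")       # the action passes when it should
        try:
            check_link_caption(StubPage(), "home", "Start")  # deliberate mismatch
        except AssertionError:
            pass                                             # ...and fails clearly when it should
        else:
            raise RuntimeError("action did not report the mismatch")

    if __name__ == "__main__":
        test_check_link_caption_action()
        print("action verified; functional tests can rely on it")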

Conclusion

Automated testing and programming have a lot in common, and none of the differences described here are absolute, but I hope I was able to illustrate in this article that automation is a profession in itself with its own specifics and challenges. Understanding these can be an important contribution to automation success.

Hans Buwalda

Hans leads LogiGear’s research and development of test automation solutions, and the delivery of advanced test automation consulting and engineering services. He is a pioneer of the keyword approach for software testing organizations, and he assists clients in strategic implementation of the Action Based Testing™ method throughout their testing organizations.

Hans is also the original architect of LogiGear’s TestArchitect™, the modular keyword-driven toolset for software test design, automation and management. Hans is an internationally recognized expert on test automation, test development and testing technology management. He is coauthor of Integrated Test Design and Automation (Addison Wesley, 2001), and speaks frequently at international testing conferences.

Hans holds a Master of Science in Computer Science from Free University, Amsterdam.
