Is Test Automation the Same as Programming Tests?

Introduction

A common issue that I come across in projects is the relationship between test automation and programming. In this article I want to highlight some of the differences that I feel exist between the two.

After many decades of often slow and hard-fought progress, software engineering is now a well-established profession with recognized methods and practices. Automated testing, however, is still relatively new and often misunderstood. Software engineers frequently get involved in software testing and try to apply their programming knowledge and experience. For many aspects of software testing this is a good thing, but over the years I have come to believe that automated testing has its own distinct properties and challenges, and that a practitioner should try to understand and work with them.

This article will explore the differences between software engineering and designing automated tests.

The Role and Importance of Test Design

A first big item is the role and importance of test design. The “logic” of an automated test should be in the test, not in the software automating it. This notion is the core of LogiGear’s Action Based Testing (ABT) method, where test cases are created by testers in a format that is as friendly to them as possible, using action words in spreadsheets. In an earlier article I discussed whether ABT is an automation technique (see: Is Action Based Testing an Automation Technique?). In my view that is not the case. I view it primarily as a test design technique. This illustrates that the focus in automated testing is placed on the test design, not the automation technology.
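To make the idea concrete, here is a minimal sketch in Python of the action-word pattern: the test is a list of tester-friendly rows, and a small dispatcher maps each action word to automation code. The actions, values, and names are invented for illustration; this is not how TestArchitect is implemented, just the shape of the pattern.

    # Minimal sketch of the action-word pattern (illustrative only).
    # Test design lives in tester-friendly rows; the automation behind
    # each action word lives in ordinary functions.

    def enter_order(customer, product, quantity):
        # A real action would drive the UI or an API of the system under test.
        print(f"entering order: {customer}, {product} x {quantity}")

    def check_order_total(customer, expected_total):
        actual_total = "125.00"  # stand-in for a value read from the system
        assert actual_total == expected_total, (
            f"order total for {customer}: expected {expected_total}, got {actual_total}")

    ACTIONS = {
        "enter order": enter_order,
        "check order total": check_order_total,
    }

    # The "spreadsheet": each row is an action word followed by its arguments.
    TEST_CASE = [
        ("enter order", "Smith", "widget", "5"),
        ("check order total", "Smith", "125.00"),
    ]

    def run(test_case):
        for action_word, *arguments in test_case:
            ACTIONS[action_word](*arguments)

    run(TEST_CASE)

The point of the split is that the rows stay readable for testers, while the functions behind the action words can change without touching the test design.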

The Role of Test Cases

A second aspect is the role of test cases and how they relate to each other. In a regular software system, the many functions (to which test cases could be compared) work together and are interdependent; changes in one part can have consequences for another. In test cases, such relationships should be avoided. A test case should have a specific scope that is well differentiated from that of all other test cases. In an ABT test it is common to let one test case leave behind a well-defined situation for the next one, but that is as far as relationships go. Test cases are best seen as "satellites" hanging around the IT system, each probing a specific aspect of it, independent from each other, as the sketch below illustrates. The consequence is that test design effort focuses on the functional aspects of the test cases needed, and should preferably stay away from the technical architecture of the automation. Test automation is a separate effort, focusing on the actions rather than the test cases.
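A small sketch of this "satellite" idea, with hypothetical actions and a dictionary as a stand-in for the system under test: each test case states the situation it needs, rather than relying on side effects buried in another test's logic.

    # Sketch of "satellite" test cases (invented names). Each test case
    # establishes the well-defined situation it needs instead of depending
    # on the internals of another test case.

    SYSTEM = {"customers": {}}  # stand-in for the system under test

    def ensure_customer(name):
        SYSTEM["customers"].setdefault(name, {"orders": []})

    def place_order(name, product, quantity):
        SYSTEM["customers"][name]["orders"].append((product, quantity))

    def test_create_customer():
        ensure_customer("Smith")  # leaves behind: customer "Smith" exists
        assert "Smith" in SYSTEM["customers"]

    def test_place_order():
        ensure_customer("Smith")  # restate the needed situation explicitly
        place_order("Smith", "widget", 3)
        assert SYSTEM["customers"]["Smith"]["orders"] == [("widget", 3)]

    test_create_customer()
    test_place_order()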

Test Case Maintainability

Maybe one of the most obvious properties of automated tests is their maintainability, which relates hardly at all to their technical structure and instead depends almost entirely on their sensitivity to changes in the system under test. A well-designed automated test suite can still be regarded as weak if small changes to the system under test have a large impact on the tests. This property is probably the single most important aspect that gives test automation its own unique dynamic.

Test Case Readability

A further major criterion for me is readability. When looking at test cases I want to be able to understand them quickly and assess their effectiveness and completeness. Readability of test cases in itself also helps their maintainability. For me this includes, for example, explicit values, both input values and expected outcomes. In ABT these show up as the arguments of the actions in the spreadsheet. In programming, however, it is common practice not to "hard code" values, and rightly so, since doing so would jeopardize maintainability. In ABT we also have the possibility of using variables instead of hard values, but I encourage test designers to use them as little as possible. Only when a value is variable and is reused in multiple test cases should a variable be used; examples are the IP address of a server to contact as part of a test, or a sales tax percentage in an order management system. For people with a software engineering background this is quite a hard notion to overcome, and it often takes a good deal of persuasion to get them to use explicit values.
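As an illustration, a hedged sketch of this guideline, with all names and values invented: explicit input values and expected outcomes in the test rows, and a variable reserved only for values genuinely shared across many test cases.

    # Explicit values keep the rows readable; a variable is warranted only
    # for values reused across test cases (illustrative names and values).

    SERVER_IP = "10.0.0.12"   # shared by many test cases: a variable is justified
    SALES_TAX_PCT = "8.25"    # likewise reused across order-related tests

    TEST_CASE = [
        ("connect", SERVER_IP),
        ("set tax rate", SALES_TAX_PCT),
        ("enter order", "Jones", "gadget", "2"),
        # explicit expected outcome: 2 x 10.00 plus 8.25% tax
        ("check order total", "Jones", "21.65"),
    ]

    for action_word, *arguments in TEST_CASE:
        print(action_word, *arguments)  # a real runner would dispatch, as sketched earlier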

Designing Test Cases is Not a Programming Challenge

There are more examples where one could consider a software background "harmful" for a designer of automated tests. A very noticeable one is when an engineer tackles a test case as a programming challenge and comes up with a complex, contrived solution that might be clever but does not help readability and obscures the intention and logic of the test case. A recent case I saw in a project was the use of a data table to test a number of links in a web page: the test case looped through the links, following the table, to check for the expected link captions. The result may have qualified as sophisticated programming, but it was hard to understand what was going on. A much easier solution, in ABT, is to define an action "check link caption" and apply it for each link to be checked.
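The contrast can be shown in a few lines of Python (the page and its links are invented stand-ins): first the table-driven loop, then the version where each check is spelled out with an explicit action.

    # The same checks, written two ways (illustrative stand-in for a page).

    PAGE_LINKS = {"home": "Home", "about": "About Us", "contact": "Contact"}

    def check_link_caption(link_id, expected_caption):
        actual = PAGE_LINKS.get(link_id)  # a real action would query the page
        assert actual == expected_caption, (
            f"{link_id}: expected {expected_caption!r}, got {actual!r}")

    # Table-driven version: it works, but the reader must unroll the loop
    # to see what is actually being tested.
    LINK_TABLE = [("home", "Home"), ("about", "About Us"), ("contact", "Contact")]
    for link_id, caption in LINK_TABLE:
        check_link_caption(link_id, caption)

    # ABT-style version: one explicit, readable check per link.
    check_link_caption("home", "Home")
    check_link_caption("about", "About Us")
    check_link_caption("contact", "Contact")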

There Should be Little or No Debugging of Tests

Last but not least is debugging. When I hear from a project that automated tests are being debugged, it immediately raises the question of how well the test design was done. My criterion is simple: test values can be off, but the automation itself should always work. If it doesn't, lower-level tests should have been created and run first to make sure all is well with the navigation in the system under test. Also, all the actions should be verified apart from the test cases: the automation engineer responsible for the actions should make his or her own test cases to test the actions before "releasing" them for use by the testers. The result should be that once a higher-level, more functionally oriented test is run, it works without problems. In our product we have now released a "debugger" to debug tests, but in fact I encourage everybody not to use it, and instead to turn an eye to the test design the moment a test turns out to be hard to run.
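A sketch of what such action-level verification might look like, using a hypothetical helper behind an action and plain unit tests: the point is that the action is proven to work before any functional test depends on it.

    # Verifying an action apart from the test cases (illustrative only).
    # The automation engineer runs these before "releasing" the action.

    import unittest

    def format_total(amount, tax_pct):
        # Hypothetical helper behind a "check order total" action.
        return f"{amount * (1 + tax_pct / 100):.2f}"

    class ActionTests(unittest.TestCase):
        def test_applies_tax(self):
            self.assertEqual(format_total(100.0, 8.25), "108.25")

        def test_zero_tax(self):
            self.assertEqual(format_total(20.0, 0.0), "20.00")

    unittest.main()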

Conclusion

Automated testing and programming have a lot in common, and none of the differences described here are absolute, but I hope I was able to illustrate in this article that automation is a profession in itself with its own specifics and challenges. Understanding these can be an important contribution to automation success.


Hans Buwalda

Hans leads LogiGear’s research and development of test automation solutions, and the delivery of advanced test automation consulting and engineering services. He is a pioneer of the keyword approach for software testing organizations, and he assists clients in strategic implementation of the Action Based Testing™ method throughout their testing organizations.

Hans is also the original architect of LogiGear’s TestArchitect™, the modular keyword-driven toolset for software test design, automation and management. Hans is an internationally recognized expert on test automation, test development and testing technology management. He is coauthor of Integrated Test Design and Automation (Addison Wesley, 2001), and speaks frequently at international testing conferences.

Hans holds a Master of Science in Computer Science from Free University, Amsterdam.

