Test Automation Is Not Automatic

Recently, while teaching a workshop on Testing Dirty Systems, I uttered this “Randyism” off the top of my head: “Test automation is not automatic.” I realized immediately that I had concisely stated the core problem with making test automation a reality in many organizations.

Most testers know that test automation is not automatic. (Wouldn’t it be great if it were?) However, management often does not know, or does not accept, that reality.

Some test tools, such as unit testing tools, can be applied almost automatically. My remarks in this article are aimed at capture/playback and scripting tools for test automation.

The issues are that:

  • Not every test can or should be automated.
  • For those tests that can be automated, it takes time and effort to build the automation.
  • For those tests that have been automated, the tests must be maintained.
  • It takes time to learn how to use a tool.
  • It takes effort, money and planning to implement a test automation framework.

None of these are automatic, even with the best of tools.

Not every test can or should be automated

Think about the things you test that are not very repeatable, or that are prone to constant change. Perhaps the things you test are built with a technology that has little or no tool support.

Then there are tests, such as user acceptance tests, that need human evaluation to judge acceptance.

Some tests require creativity and adaptation to perform. You may have to make judgments during the test that are too complex to describe in a script. Test automation takes over the mundane tests so you can give more time and attention to the unique ones.

Your job is to identify the tests that can be automated. (They don’t come labeled!) Then you must understand the nature of each test: the prerequisites, the steps to perform it, the exceptions, and where to find the expected results. None of this is automatic. It’s all test design and test implementation.
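To make this concrete, here is a minimal sketch in Python of the kind of information you would capture for each candidate test before automating it. The structure and the login example are my own illustration; no particular tool prescribes them.

```python
# Illustrative only: a simple record of what must be known before a test
# can be automated (prerequisites, steps, exceptions, expected results).
from dataclasses import dataclass, field


@dataclass
class CandidateTest:
    name: str
    prerequisites: list[str]      # environment, data, accounts needed first
    steps: list[str]              # the actions a script would have to perform
    expected_results: list[str]   # what to verify, and where to find it
    exceptions: list[str] = field(default_factory=list)  # known special cases
    automatable: bool = False     # decided by a person, not by the tool


login_test = CandidateTest(
    name="Standard user can log in",
    prerequisites=["Test user account exists", "Build deployed to the QA environment"],
    steps=["Open the login page", "Enter valid credentials", "Click Sign In"],
    expected_results=["Dashboard is displayed", "Audit log records the login"],
    automatable=True,
)

print(login_test.name, "->", "automate" if login_test.automatable else "keep manual")
```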

For those tests that can be automated, it takes time and effort to build the automation

It’s one thing to automate a function, but another to design a good test of that function. That’s why capture/playback is so appealing, yet so lacking. The issues are: what exactly are you testing in the capture session, and how can you extend those tests to add value?

You have to apply approaches like data-driven testing and keyword-driven testing, which take time and effort to understand and implement. These are not “out of the box” deliverables.
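As a rough illustration of the data-driven idea, here is a minimal sketch using pytest (my choice; the article does not name a tool). The login scenario, the data rows, and attempt_login() are hypothetical stand-ins. The test logic is written once, and coverage lives in the data table, so adding a case means adding a row rather than recording or writing a new script.

```python
import pytest

# Hypothetical data table: each row is one test case.
LOGIN_CASES = [
    # (username,        password,     expected outcome)
    ("standard_user",   "correct-pw", "success"),
    ("standard_user",   "wrong-pw",   "error"),
    ("locked_out_user", "correct-pw", "locked"),
    ("",                "correct-pw", "error"),
]


def attempt_login(username, password):
    """Stand-in for the real application driver (a UI or API call)."""
    if username == "locked_out_user":
        return "locked"
    if username == "standard_user" and password == "correct-pw":
        return "success"
    return "error"


# The same test logic runs once for every row in the table.
@pytest.mark.parametrize("username, password, expected", LOGIN_CASES)
def test_login(username, password, expected):
    assert attempt_login(username, password) == expected
```

Keyword-driven testing follows the same principle, except that the table rows hold action keywords (for example, “login” or “add to cart”) that a small interpreter maps to reusable functions.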

For those tests that have been automated, the tests must be maintained

This has been one of the consistent issues in test automation. There are things you can do to ease the maintenance burden, but it doesn’t do away with the issue.

For example, modular and reusable test scripts are very helpful in reducing the number of tests that must be maintained. Still, you must maintain the scripts you have.
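As one sketch of what modular, reusable scripts can look like, here is an example written against Selenium WebDriver (my choice; the article names no tool). The URLs, element IDs, and test data are hypothetical. Because every test that needs a logged-in user calls the same login() step, a change to the login page means maintaining one function rather than every script that logs in.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

BASE_URL = "https://example.test"  # hypothetical application under test


def login(driver, username, password):
    """Reusable step shared by every test that needs a logged-in user."""
    driver.get(f"{BASE_URL}/login")
    driver.find_element(By.ID, "username").send_keys(username)
    driver.find_element(By.ID, "password").send_keys(password)
    driver.find_element(By.ID, "sign-in").click()


def add_item_to_cart(driver, item_id):
    """Another reusable step shared by the shopping tests."""
    driver.get(f"{BASE_URL}/items/{item_id}")
    driver.find_element(By.ID, "add-to-cart").click()


def test_checkout_shows_added_item():
    driver = webdriver.Chrome()
    try:
        login(driver, "standard_user", "correct-pw")
        add_item_to_cart(driver, "SKU-1001")
        driver.get(f"{BASE_URL}/cart")
        assert driver.find_element(By.ID, "cart-count").text == "1"
    finally:
        driver.quit()
```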

This means you must know when the application has changed and what the changes were, and then update existing tests or create new tests for those changes. This implies the presence of configuration management and traceability.
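Traceability can start out very simply. The sketch below (requirement and test names are illustrative) keeps a map from requirements to the automated tests that cover them, so that when configuration management reports a change you can immediately list the scripts that need review.

```python
# Illustrative requirement-to-test traceability map.
TRACEABILITY = {
    "REQ-101 Login":         ["test_login", "test_login_lockout"],
    "REQ-204 Shopping cart": ["test_checkout_shows_added_item"],
}


def tests_to_review(changed_requirements):
    """Return the automated tests affected by a set of changed requirements."""
    affected = set()
    for req in changed_requirements:
        affected.update(TRACEABILITY.get(req, []))
    return sorted(affected)


print(tests_to_review(["REQ-101 Login"]))
# -> ['test_login', 'test_login_lockout']
```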

It takes time to learn how to use a tool

Of course, the learning curve varies by tool, but the fact remains that a tool with sophisticated features will take some time to learn. The learning curve also varies by person. Some people with deep experience in automation may be able to learn very quickly, but when you consider a wider deployment across the organization, most people will fall on the lower end of the experience scale.

The time also varies by the approach used to get training. Some people try self-learning, which is admirable but typically takes longer than classroom training. Mentoring from an experienced person may be the best approach.

You must assess your organization’s skills and abilities to know whether mentoring will help. The best mentor in the world can’t help the person who isn’t ready to learn. Is that another “Randyism”?

It takes effort, money and planning to implement a test automation framework

The framework can take a variety of forms, but the context here is the organizational framework, which provides a way to build and control test automation efficiently. Tools are only one-third of the picture; you also need processes and trained, motivated people to make everything work together.

Out of the box, tools are just software. A process framework helps with reuse and the maintenance of test automation.

Frameworks aren’t automatic, either. They must be adapted to fit each situation, and therefore require time, effort, and funding to design and implement.

Conclusion

This article is certainly not an in-depth treatment of this topic. Entire books and training courses have been written to address these and other aspects of test automation.

I hope this expands on my simple statement, “Test automation is not automatic,” and provokes your own thinking about this reality.

For over twenty years now, I have seen a wide variety of test tool companies promote their tools as effort-free, script-less, or however else they choose to position their products. The evidence shows these are often empty claims. Just compare the number of people who have been successful with test automation tools to the number who have given up on them. In my research, it’s about 25% successful to 75% unsuccessful.

I hope this article is an encouragement to those who have tried to implement test automation as well as to those who haven’t. It’s important to go into these projects with your eyes open and expectations at a realistic level. Once you get past the idea that the tools do all the work, you can do the planning and other work needed to increase your chances of success.

Randall Rice
Randy Rice is a leading author, speaker, and consulting practitioner in the field of software testing and software quality. Rice has worked with organizations worldwide to improve the quality of their information systems and optimize their testing processes.
