How To Decide What To Automate: Do’s And Don’ts

The 12 Do’s and Don’ts of Test Automation

When I started my career as a Software Tester a decade ago, Test Automation was viewed with some skepticism.

Automated tests were hard to set up, time-consuming to run, and provided unreliable results.

But since then, huge advances have been made, and now the choice not to automate tests would be seen as extremely foolhardy. I believe this is due to 2 changes: first, the move to test closer to the code using unit tests and services tests, and second, the availability of more reliable test tools.

However, this does not mean that everything should be automated. Having too many tests can be a detriment to good software development. Tests that take too long to run slow down the feedback that developers get when they commit their code, and therefore slow down the entire development process. And tests that are flaky or take too much time to fix result in a distrust of the tests, which means that they won’t be run at all or that their results will be ignored.  

Therefore, the first step in Automation success is knowing what tests to automate and what not to automate.

Here are some guidelines:

1. DO Automate Tasks as Close to the Code as Possible

Unit tests are so important because they exercise code functionality without touching any dependencies. If a developer makes a change that results in a hole in the logic, that hole will be detected before the change makes it to the tester, saving everyone valuable time. A popular trend today is TDD, or Test-Driven Development: the developer writes the unit tests before writing the code, ensuring that they begin by thinking about all the possible use cases for the software before solving the technical challenges of writing the code.
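As a minimal sketch of what this looks like in practice (the pricing module and calculate_discount function here are hypothetical), a TDD-style unit test written with pytest describes the expected behavior before the implementation exists, and exercises the logic without touching any dependencies:

```python
# test_discounts.py -- a hypothetical example of a TDD-style unit test:
# the test states the expected behavior before calculate_discount is written.
import pytest

from pricing import calculate_discount  # hypothetical module under test


def test_discount_applied_for_large_orders():
    # Orders of $100 or more get a 10% discount.
    assert calculate_discount(order_total=150.00) == pytest.approx(15.00)


def test_no_discount_for_small_orders():
    # Orders under $100 get no discount.
    assert calculate_discount(order_total=40.00) == 0


def test_negative_totals_are_rejected():
    # The kind of hole in the logic we want caught before a tester ever sees it.
    with pytest.raises(ValueError):
        calculate_discount(order_total=-5.00)
```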

Services tests are also extremely valuable because they test the functionality of the software without going through the UI. Many of today's applications communicate with the server through REST APIs. When going through the UI, there are often limitations on what a user can do, but when going directly through the API, more test cases can be executed. For example, a UI might limit the type of characters that can be entered in a form field, but there may not be server-side validation on the input, which could be discovered through API testing.
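Here is a hedged sketch of that idea using the Python requests library; the endpoint, field names, and expected status code are assumptions, but the point is that a services-level test can submit input the UI would never allow:

```python
# A hypothetical services-level test; the base URL and payload are placeholders.
# The request bypasses any client-side UI validation entirely.
import requests

BASE_URL = "https://example.com/api"  # placeholder


def test_server_rejects_invalid_characters_in_username():
    # The UI may block these characters in the form field, but the server
    # must validate them too, because requests can be sent directly.
    payload = {"username": "bad<script>name", "email": "user@example.com"}
    response = requests.post(f"{BASE_URL}/users", json=payload)
    assert response.status_code == 400, "Server accepted input the UI would have blocked"
```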

2. DO Automate Repetitive Tasks

Some tests are so important that they need to run repeatedly. A perfect example of this is the login test: if your users can’t log into the application, you have a real customer service problem! But no one wants to spend time manually logging into an application again and again. Automating your login tests ensures that authentication is tested with a wide variety of users and accounts, both valid and invalid.
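One way to cover many accounts without repeating yourself is a parameterized test. The sketch below assumes a hypothetical /login endpoint and placeholder credentials; pytest runs the same check once per row:

```python
# A hypothetical parameterized login test; endpoint, usernames, and passwords
# are placeholders for whatever your application and test data provide.
import pytest
import requests

BASE_URL = "https://example.com/api"  # placeholder


@pytest.mark.parametrize(
    "username, password, expected_status",
    [
        ("standard_user", "correct-password", 200),    # valid account
        ("admin_user", "correct-password", 200),       # valid admin account
        ("standard_user", "wrong-password", 401),      # bad password
        ("locked_out_user", "correct-password", 403),  # locked account
    ],
)
def test_login(username, password, expected_status):
    response = requests.post(
        f"{BASE_URL}/login",
        json={"username": username, "password": password},
    )
    assert response.status_code == expected_status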

3. DO Automate Things Users will do Every Day

What is the primary function of your software? What is the typical path a user will take through it? These are the activities that should have automated tests. Rather than walking through that path manually every morning, you can let an automated test do it, and you'll be notified right away if there is a problem.
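A minimal sketch of such a primary-path check, written with Selenium against a hypothetical page (the URL and element IDs are placeholders), might look like this; scheduled to run each morning, it reports a broken critical path before anyone has to walk through it by hand:

```python
# A sketch of a daily "primary path" check with Selenium; URL and element
# IDs are hypothetical. Run it on a schedule (e.g. from CI) every morning.
from selenium import webdriver
from selenium.webdriver.common.by import By


def test_primary_user_path():
    driver = webdriver.Chrome()
    try:
        driver.get("https://example.com")  # placeholder URL
        driver.find_element(By.ID, "search-box").send_keys("test plan")
        driver.find_element(By.ID, "search-button").click()
        results = driver.find_elements(By.CLASS_NAME, "search-result")
        assert len(results) > 0, "Primary search path returned no results"
    finally:
        driver.quit()
```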

4. DO Automate Basic Smoke-Level Tests

I like to think of smoke-level tests as covering the features we would be really embarrassed to see fail in the field. One company where I worked early in my career had a search feature that was broken for weeks, and no one noticed because we hadn't run a test on it. Unfortunately, the bug was pushed out to production and seen by customers. Automating these tests and running them with every build can help catch major problems quickly.

5. DO Automate Things that will Save your Time

A coworker of mine was testing a feature that needed a completely new account set up each time the test ran. Rather than set one up manually every time, he wrote Automation that created the account for him, saving valuable time.
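A common way to do this in pytest is a fixture that creates (and cleans up) a fresh account around each test. The endpoint, payload, and response fields below are placeholders, not any specific product's API:

```python
# A hypothetical fixture that provisions a fresh account per test via an API
# call instead of manual setup; the endpoint and fields are placeholders.
import uuid

import pytest
import requests

BASE_URL = "https://example.com/api"  # placeholder


@pytest.fixture
def fresh_account():
    # Unique username so every run starts from a clean account.
    username = f"test-user-{uuid.uuid4().hex[:8]}"
    response = requests.post(
        f"{BASE_URL}/accounts",
        json={"username": username, "plan": "trial"},
    )
    response.raise_for_status()
    account = response.json()
    yield account
    # Clean up afterwards so test accounts don't accumulate.
    requests.delete(f"{BASE_URL}/accounts/{account['id']}")


def test_feature_with_new_account(fresh_account):
    assert fresh_account["username"].startswith("test-user-")
```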

6. DO Automate Things that will Allow you to Exercise Lots of Different Options

A test that fills in every available field on a form is not completely testing the form. What if one field is left blank? What if two fields are left blank? What if one of the blank fields is required? With Automation, you can exercise many different combinations of form submission in much less time than it would take to do manually.
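As a rough sketch of how cheaply Automation covers those combinations, the example below generates every include/omit combination of three hypothetical form fields and submits each one to a placeholder endpoint, expecting a rejection whenever a required field is blank:

```python
# A hypothetical sketch of exercising many form-field combinations at once.
# Field names, the endpoint, and expected status codes are assumptions.
import itertools

import pytest
import requests

BASE_URL = "https://example.com/api"  # placeholder
FIELDS = {"name": "Pat Tester", "email": "pat@example.com", "phone": "555-0100"}
REQUIRED = {"name", "email"}

# Every combination of included/omitted fields (2^3 = 8 cases).
COMBINATIONS = [
    set(combo)
    for n in range(len(FIELDS) + 1)
    for combo in itertools.combinations(FIELDS, n)
]


@pytest.mark.parametrize("included", COMBINATIONS)
def test_form_submission_combinations(included):
    payload = {field: FIELDS[field] for field in included}
    response = requests.post(f"{BASE_URL}/contact-form", json=payload)
    if REQUIRED <= included:
        assert response.status_code == 200  # all required fields present
    else:
        assert response.status_code == 400  # a required field is missing
```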

7. DO Automate Things that will Alert you when Something is Wrong

I have several negative tests in my API test suites that verify that a user can't do something when they don't have permission to do it. Recently some of those tests failed, alerting me that someone had changed the permission structure and that users could now view content they shouldn't.
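A negative permission test of that kind can be as small as the sketch below; the endpoint, token, and expected status code are assumptions, but the shape is typical: request something the user should not see and assert that the server refuses:

```python
# A hypothetical negative test: a user without permission should get a 403
# when requesting restricted content. Endpoint and token are placeholders.
import requests

BASE_URL = "https://example.com/api"  # placeholder
READ_ONLY_TOKEN = "read-only-user-token"  # placeholder credential


def test_read_only_user_cannot_view_admin_report():
    response = requests.get(
        f"{BASE_URL}/admin/reports/monthly",
        headers={"Authorization": f"Bearer {READ_ONLY_TOKEN}"},
    )
    # If this ever returns 200, the permission structure has changed and
    # users can now see content they shouldn't -- exactly what we want flagged.
    assert response.status_code == 403
```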

8. DON’T Automate Tests that you Know will be Flaky

If you can’t come up with a way to run an automated test on a feature and have it pass consistently, you may want to run that test manually or find a different way to assert your results. When I was first getting started with Automation, I wanted to test that a feature sent an email and that the email was received. I discovered that email clients are tricky to test in the UI, because there’s no way of knowing how long it will take before the email is delivered. Instead, I verified that the email provider had sent the email, and occasionally did a manual check of the inbox to make sure that the email arrived.

9. DON’T Automate Tests for Features that are in the Early Stages and are Expected to go Through Many Changes

It’s great to write unit tests for new code, which as mentioned above is usually done by the developer. And automated services tests can be created before there is a UI for a new feature. But if you know that your API endpoints or your UI will be changing quite a bit as the story progresses, you may want to hold off on services or UI Automation until things have settled down a bit.  For the moment, manual testing will be your best strategy.

10. DON’T Automate Tests for Features that no one Cares About

Your application probably runs on a wide variety of browsers, and your inclination may be to run your tests on all of them. But it could be that only 1% of users are running your application on a certain browser. If that’s the case, why go through the stress of trying to run your tests on this browser?  Similarly, if there is a feature in your application that will be deprecated soon and only 1% of your users are using it, your time would be better spent automating another feature.

11. DON’T Automate Weird Edge Cases

There will always be bugs in software, but some will be more likely to be seen by users than others. You may be fascinated by the bug that is caused by going to a specific sequence of pages, entering non-UTF-8 characters, and then clicking the Back button 3 times in a row, but since it’s very unlikely that an end user will do this, it’s not worth your time to design an automated test for it.

12. DON’T Automate Bugs you are Sure will Never be Seen Again

I once worked with someone who felt that every bug found needed a corresponding test. This is not always the case. Some bugs are merely cosmetic and are unlikely to appear again. A good example of this is the typo. If a developer accidentally entered text that said “Contcat us” instead of “Contact us,” that was simply an oversight. No developer would ever go into the code and revert to the earlier misspelling, so there’s no need to automate a test that verifies that text.

Summary

Automated tests, when done well, provide fast feedback for developers, alert testers to problems well before they reach production, and free up testers to do more exploratory testing. But when Automation is done poorly, it results in tests that no one trusts and in time wasted for everyone.

Kristin Jackvony
Kristin Jackvony discovered her passion for software testing after working as a music educator for nearly 2 decades. She has been a QA engineer, manager, and lead for the last eleven years and is currently the Principal Engineer for Quality at Paylocity. Her weekly blog, Think Like a Tester, helps software testers focus on the fundamentals of testing.
