How To Decide What To Automate: Do’s And Don’ts

The 12 Do’s and Don’ts of Test Automation

When I started my career as a Software Tester a decade ago, Test Automation was viewed with some skepticism.

Automated tests were hard to set up, time-consuming to run, and provided unreliable results.

But since then, huge advances have been made, and now the choice not to automate tests would be seen as extremely foolhardy. I believe this is due to 2 changes: first, the move to test closer to the code using unit tests and services tests, and second, the availability of more reliable test tools.

However, this does not mean that everything should be automated. Having too many tests can be a detriment to good software development. Tests that take too long to run slow down the feedback that developers get when they commit their code, and therefore slow down the entire development process. And tests that are flaky or take too much time to fix result in a distrust of the tests, which means that they won’t be run at all or that their results will be ignored.  

Therefore, the first step in Automation success is knowing what tests to automate and what not to automate.

Here are some guidelines:

1. DO Automate Tests as Close to the Code as Possible

Unit tests are so important because they exercise code functionality without touching any dependencies. If a developer makes a change that results in a hole in the logic, that hole will be detected before the change makes it to the tester, saving everyone valuable time. A popular trend today is TDD, or Test-Driven Development, where the developer writes the unit tests before writing the code, ensuring that they begin by thinking through all the possible use cases for the software before solving the technical challenges of writing the code.
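
To make this concrete, here is a minimal pytest sketch of what writing the tests first can look like. The pricing module and its calculate_discount function are hypothetical stand-ins for whatever code the developer is about to write.

```python
# A minimal sketch of unit tests written TDD-style, assuming a hypothetical
# calculate_discount(total, is_member) function that does not exist yet.
# The tests capture the use cases first; the implementation follows until
# they all pass.
import pytest

from pricing import calculate_discount  # hypothetical module under test


def test_member_gets_ten_percent_discount():
    assert calculate_discount(total=100.0, is_member=True) == 90.0


def test_non_member_gets_no_discount():
    assert calculate_discount(total=100.0, is_member=False) == 100.0


def test_negative_total_is_rejected():
    with pytest.raises(ValueError):
        calculate_discount(total=-5.0, is_member=True)
```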

Services tests are also extremely valuable because they test the functionality of the software without going through the UI. Many of today's applications have a UI that simply makes REST requests to an API on the server. The UI often limits what a user can do, but when going directly through the API, more test cases can be executed. For example, a UI might restrict the type of characters that can be entered in a form field, but there may be no server-side validation on the input, which could be discovered through API testing.
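
Here is a sketch of that kind of services-level test in Python using the requests library. The POST /api/profile endpoint, the base URL, and the validation rule are placeholders, not a real API.

```python
# A sketch of a services test that bypasses the UI, assuming a hypothetical
# POST /api/profile endpoint. The UI limits the name field to letters, but
# this test checks that the server enforces the same rule.
import requests

BASE_URL = "https://example.test"  # hypothetical environment URL


def test_server_rejects_invalid_characters_in_name():
    payload = {"name": "<script>alert(1)</script>", "email": "user@example.test"}
    response = requests.post(f"{BASE_URL}/api/profile", json=payload, timeout=10)

    # A well-validated API should refuse input the UI would have blocked.
    assert response.status_code == 400
```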

2. DO Automate Repetitive Tasks

Some tests are so important that they need to run repeatedly. A perfect example of this is the login test: if your users can’t log into the application, you have a real customer service problem! But no one wants to spend time manually logging into an application again and again. Automating your login tests ensures that authentication is tested with a wide variety of users and accounts, both valid and invalid.
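
A parametrized test is one straightforward way to do this. The sketch below assumes a hypothetical POST /api/login endpoint and made-up test accounts; the pattern, not the details, is the point.

```python
# A sketch of a parametrized login test covering valid and invalid accounts,
# assuming a hypothetical POST /api/login endpoint and test credentials.
import pytest
import requests

BASE_URL = "https://example.test"  # hypothetical environment URL

LOGIN_CASES = [
    ("standard_user", "correct-password", 200),
    ("admin_user", "correct-password", 200),
    ("standard_user", "wrong-password", 401),
    ("locked_out_user", "correct-password", 403),
    ("", "", 400),
]


@pytest.mark.parametrize("username,password,expected_status", LOGIN_CASES)
def test_login(username, password, expected_status):
    response = requests.post(
        f"{BASE_URL}/api/login",
        json={"username": username, "password": password},
        timeout=10,
    )
    assert response.status_code == expected_status
```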

3. DO Automate Things Users will do Every Day

What is the primary function of your software? What is the typical path that a user will take when using your software? These are the kinds of activities that should have automated tests. Rather than running through a manual user path every morning, you can set your automated test to do it, and you’ll be notified right away if there is a problem.  
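
If your team drives the UI with a browser-automation library such as Playwright, a daily happy-path check might look roughly like the sketch below. The URLs, selectors, and credentials are placeholders for whatever your application's primary path actually is.

```python
# A sketch of a daily "primary user path" check using Playwright's sync API.
# Every URL, selector, and credential here is a hypothetical placeholder.
from playwright.sync_api import sync_playwright


def test_primary_user_path():
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()

        # Log in as a dedicated test user.
        page.goto("https://example.test/login")
        page.fill("#username", "daily_check_user")
        page.fill("#password", "daily_check_password")
        page.click("#login-button")

        # Walk the core path for this hypothetical app: open the report
        # most users look at first and confirm it renders.
        page.click("text=Reports")
        assert page.is_visible("#daily-sales-report")

        browser.close()
```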

4. DO Automate Basic Smoke-Level Tests

I like to think of smoke-level tests as tests of features that we would be really embarrassed to see fail in the field. One company where I worked early in my career had a search feature that was broken for weeks, and no one noticed because we hadn't run a test on it. Unfortunately, the bug was pushed out to production and seen by customers. Automating these tests and running them with every build can help catch major problems quickly.
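
One simple way to keep these tests runnable on every build is to tag them so the build pipeline can pick out just that subset. The sketch below uses a pytest marker; the search_client helper is hypothetical, and the marker name is up to you.

```python
# A sketch of tagging smoke-level tests with a pytest marker so the build
# can run only that subset. The search_client module is hypothetical.
import pytest

from search_client import search  # hypothetical helper that calls the application


@pytest.mark.smoke
def test_search_returns_results_for_common_term():
    results = search("invoice")
    assert len(results) > 0


# The build job would then run only the tagged tests, e.g.:
#   pytest -m smoke
# (register the "smoke" marker in pytest.ini to avoid unknown-marker warnings)
```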

5. DO Automate Things that will Save your Time

A coworker of mine was testing a feature that needed a completely new account set up each time the test was run. Rather than set it up manually every time, he created Automation that would set up a new account for him, saving him valuable time.
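
In pytest, that kind of setup Automation often takes the form of a fixture. The sketch below assumes a hypothetical POST /api/accounts endpoint that provisions a fresh account on demand.

```python
# A sketch of account setup as a pytest fixture, assuming a hypothetical
# POST /api/accounts endpoint. Any test that needs a brand-new account asks
# for the fixture instead of clicking through account creation by hand.
import uuid

import pytest
import requests

BASE_URL = "https://example.test"  # hypothetical environment URL


@pytest.fixture
def new_account():
    username = f"auto-user-{uuid.uuid4().hex[:8]}"
    response = requests.post(
        f"{BASE_URL}/api/accounts",
        json={"username": username, "plan": "trial"},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()  # account details the test can use


def test_feature_with_fresh_account(new_account):
    # The feature under test would use the freshly provisioned account here.
    assert new_account["username"].startswith("auto-user-")
```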

6. DO Automate Things that will Allow you to Exercise Lots of Different Options

A test that submits a form with every available field filled in is not completely testing the form. What if there is one missing field? What if there are 2 missing fields? What if one of those fields is required? With Automation, you can exercise many different combinations of form submission in much less time than it would take to do manually.
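
This is another place where parametrization shines. The sketch below assumes a hypothetical POST /api/register endpoint where email and password are required and nickname is optional.

```python
# A sketch of exercising many form-submission combinations with
# pytest.mark.parametrize, against a hypothetical POST /api/register endpoint.
import pytest
import requests

BASE_URL = "https://example.test"  # hypothetical environment URL

FORM_CASES = [
    ({"email": "a@example.test", "password": "pw123456", "nickname": "Al"}, 201),
    ({"email": "a@example.test", "password": "pw123456"}, 201),  # optional field missing
    ({"email": "a@example.test"}, 400),                          # required field missing
    ({"password": "pw123456"}, 400),                             # required field missing
    ({}, 400),                                                   # everything missing
]


@pytest.mark.parametrize("payload,expected_status", FORM_CASES)
def test_form_submission_combinations(payload, expected_status):
    response = requests.post(f"{BASE_URL}/api/register", json=payload, timeout=10)
    assert response.status_code == expected_status
```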

7. DO Automate Things that will Alert you when Something is Wrong

I have several negative tests in my API test suites that verify that a user can’t do something when they don’t have permission to do it. Recently some of those tests failed, alerting me to the fact that someone had changed the permission structure, and now a user was able to view content they shouldn’t.
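
A negative permissions test of that kind can be as simple as the sketch below, which assumes a hypothetical GET /api/admin/reports endpoint and a non-admin test account.

```python
# A sketch of a negative permissions test: a regular user must not be able
# to view admin content. The endpoints and credentials are hypothetical.
import requests

BASE_URL = "https://example.test"  # hypothetical environment URL


def login_as_regular_user():
    # Hypothetical login call; returns a bearer token for a non-admin account.
    response = requests.post(
        f"{BASE_URL}/api/login",
        json={"username": "regular_user", "password": "correct-password"},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()["token"]


def test_regular_user_cannot_view_admin_reports():
    token = login_as_regular_user()
    response = requests.get(
        f"{BASE_URL}/api/admin/reports",
        headers={"Authorization": f"Bearer {token}"},
        timeout=10,
    )
    # If this ever starts returning 200, the permission structure has changed.
    assert response.status_code == 403
```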

8. DON’T Automate Tests that you Know will be Flaky

If you can’t come up with a way to run an automated test on a feature and have it pass consistently, you may want to run that test manually or find a different way to assert your results. When I was first getting started with Automation, I wanted to test that a feature sent an email and that the email was received. I discovered that email clients are tricky to test in the UI, because there’s no way of knowing how long it will take before the email is delivered. Instead, I verified that the email provider had sent the email, and occasionally did a manual check of the inbox to make sure that the email arrived.
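
The more stable assertion might look something like the sketch below, which queries a hypothetical email provider API for the message it was asked to send instead of polling an inbox.

```python
# A sketch of asserting against the email provider's send record rather than
# the recipient's inbox. The application endpoint, provider API, and
# credential are all hypothetical.
import requests

BASE_URL = "https://example.test"                           # hypothetical application URL
EMAIL_PROVIDER_URL = "https://email-provider.example.test"  # hypothetical provider API
PROVIDER_API_KEY = "test-api-key"                           # hypothetical credential


def test_welcome_email_was_sent():
    # Trigger the feature under test: registering a user sends a welcome email.
    requests.post(
        f"{BASE_URL}/api/register",
        json={"email": "new-user@example.test", "password": "pw123456"},
        timeout=10,
    ).raise_for_status()

    # Assert that the provider accepted a message for this recipient.
    response = requests.get(
        f"{EMAIL_PROVIDER_URL}/v1/messages",
        params={"to": "new-user@example.test", "limit": 1},
        headers={"Authorization": f"Bearer {PROVIDER_API_KEY}"},
        timeout=10,
    )
    response.raise_for_status()
    messages = response.json()["messages"]
    assert len(messages) == 1
    assert messages[0]["subject"] == "Welcome!"
```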

9. DON’T Automate Tests for Features that are in the Early Stages and are Expected to go Through Many Changes

It’s great to write unit tests for new code, which, as mentioned above, is usually done by the developer. And automated services tests can be created before there is a UI for a new feature. But if you know that your API endpoints or your UI will be changing quite a bit as the story progresses, you may want to hold off on services or UI Automation until things have settled down. For the moment, manual testing will be your best strategy.

10. DON’T Automate Tests for Features that no one Cares About

Your application probably runs on a wide variety of browsers, and your inclination may be to run your tests on all of them. But it could be that only 1% of users are running your application on a certain browser. If that’s the case, why go through the stress of trying to run your tests on this browser?  Similarly, if there is a feature in your application that will be deprecated soon and only 1% of your users are using it, your time would be better spent automating another feature.

11. DON’T Automate Weird Edge Cases

There will always be bugs in software, but some will be more likely to be seen by users than others. You may be fascinated by the bug that is caused by going to a specific sequence of pages, entering non-UTF-8 characters, and then clicking the Back button 3 times in a row, but since it’s very unlikely that an end user will do this, it’s not worth your time to design an automated test for it.

12. DON’T Automate Bugs you are Sure will Never be Seen Again

I once worked with someone who felt that every bug found needed a corresponding test. This is not always the case. Some bugs are merely cosmetic and are unlikely to appear again. A good example of this is the typo. If a developer accidentally entered text that said “Contcat us” instead of “Contact us,” that was simply an oversight. No developer would ever go into the code and revert to the earlier misspelling, so there’s no need to automate a test that verifies that text.

Summary

Automated tests, when done well, provide fast feedback for developers, alert testers to problems well before they reach production, and free up testers to do more exploratory testing. But when Automation is done poorly, it results in tests that are not trusted and in wasted time for everyone.

Kristin Jackvony
Kristin Jackvony discovered her passion for software testing after working as a music educator for nearly 2 decades. She has been a QA engineer, manager, and lead for the last eleven years and is currently the Principal Engineer for Quality at Paylocity. Her weekly blog, Think Like a Tester, helps software testers focus on the fundamentals of testing.
