How To Decide What To Automate: Do’s And Don’ts

The 12 Do’s and Don’ts of Test Automation

When I started my career as a Software Tester a decade ago, Test Automation was viewed with some skepticism.

Automated tests were hard to set up, time-consuming to run, and provided unreliable results.

But since then, huge advances have been made, and now the choice not to automate tests would be seen as extremely foolhardy. I believe this is due to 2 changes: first, the move to test closer to the code using unit tests and services tests, and second, the availability of more reliable test tools.

However, this does not mean that everything should be automated. Having too many tests can be a detriment to good software development. Tests that take too long to run slow down the feedback that developers get when they commit their code, and therefore slow down the entire development process. And tests that are flaky or take too much time to fix result in a distrust of the tests, which means that they won’t be run at all or that their results will be ignored.  

Therefore, the first step in Automation success is knowing what tests to automate and what not to automate.

Here are some guidelines:

1. DO Automate Tests as Close to the Code as Possible

Unit tests are so important because they exercise code functionality without touching any dependencies. If a developer makes a change that leaves a hole in the logic, that hole will be detected before the change ever reaches the tester, saving everyone valuable time. A popular trend today is TDD, or Test-Driven Development, where the developer writes the unit tests before writing the code, ensuring that they think through all the possible use cases for the software before tackling the technical challenges of writing it.
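
To make that concrete, here is a minimal sketch of the TDD idea in pytest: the tests are written before the code exists, so the pricing module, the apply_discount function, and its rules are all invented for this example.

```python
# Illustrative unit tests written TDD-style, before the code they describe.
# The pricing module and apply_discount() function are hypothetical.
import pytest

from pricing import apply_discount  # hypothetical module under test


def test_ten_percent_discount_applied():
    # Happy path: a 10% discount on 100.00 should yield 90.00.
    assert apply_discount(price=100.00, percent=10) == 90.00


def test_zero_discount_leaves_price_unchanged():
    assert apply_discount(price=49.99, percent=0) == 49.99


def test_negative_discount_is_rejected():
    # Thinking through the use cases up front: a negative discount is invalid.
    with pytest.raises(ValueError):
        apply_discount(price=100.00, percent=-5)
```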

Services tests are also extremely valuable because they test the functionality of the software without going through the UI. Many of today’s applications have a UI that communicates with the server through REST API requests. When going through the UI, there are often limitations on what a user can do, but when going directly through the API, many more test cases can be executed. For example, a UI might limit the types of characters that can be entered in a form field, but there may be no server-side validation of that input, a gap that API testing can discover.
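
As a rough sketch of that example, a services-level test might post characters the UI would block straight to the endpoint and check that the server rejects them. The URL, field names, and expected status code below are assumptions, not a real API.

```python
# A services-level test (pytest + requests) that bypasses the UI entirely.
# The endpoint, payload fields, and expected response are hypothetical.
import requests

BASE_URL = "https://example.test/api"  # placeholder URL


def test_server_rejects_characters_the_ui_would_block():
    # Characters a well-behaved UI form would never let through.
    payload = {"username": "alice<script>", "zip_code": "ab!@#"}
    response = requests.post(f"{BASE_URL}/profile", json=payload, timeout=10)

    # If server-side validation is missing, this fails and exposes the gap
    # that UI-only testing would never reveal.
    assert response.status_code == 400
```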

2. DO Automate Repetitive Tasks

Some tests are so important that they need to run repeatedly. A perfect example of this is the login test: if your users can’t log into the application, you have a real customer service problem! But no one wants to spend time manually logging into an application again and again. Automating your login tests ensures that authentication is tested with a wide variety of users and accounts, both valid and invalid.
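
One way to cover that variety without logging in by hand is a parameterized test. This sketch assumes a REST login endpoint; the accounts and status codes are made up for illustration.

```python
# A parameterized login test (pytest + requests) covering valid and invalid
# accounts in one run. The URL, accounts, and status codes are placeholders.
import pytest
import requests

LOGIN_URL = "https://example.test/api/login"  # hypothetical endpoint

LOGIN_CASES = [
    ("standard_user", "correct-password", 200),
    ("admin_user", "correct-password", 200),
    ("standard_user", "wrong-password", 401),
    ("locked_out_user", "correct-password", 403),
    ("", "", 400),
]


@pytest.mark.parametrize("username,password,expected_status", LOGIN_CASES)
def test_login(username, password, expected_status):
    response = requests.post(
        LOGIN_URL,
        json={"username": username, "password": password},
        timeout=10,
    )
    assert response.status_code == expected_status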

3. DO Automate Things Users will do Every Day

What is the primary function of your software? What is the typical path that a user will take when using your software? These are the kinds of activities that should have automated tests. Rather than running through a manual user path every morning, you can set up an automated test to do it, and you’ll be notified right away if there is a problem.

4. DO Automate Basic Smoke-Level Tests

I like to think of smoke-level tests as tests of features that we would be really embarrassed by if they failed in the field. One company where I worked early in my career had a search feature that was broken for weeks, and no one noticed because we hadn’t run a test on it. Unfortunately, the bug was pushed out to production and seen by customers. Automating these tests and running them with every build can help catch major problems quickly.
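
One possible way to keep a small smoke set running with every build is to tag those tests with a marker and select only that marker in the build pipeline. The marker name and the search client below are assumptions for illustration.

```python
# Tagging a smoke-level test with a pytest marker. The "smoke" marker would be
# registered in pytest.ini; the search() client is a hypothetical helper.
import pytest

from app_client import search  # hypothetical API client for the application


@pytest.mark.smoke
def test_search_returns_results():
    # The kind of feature we'd be embarrassed to ship broken.
    results = search("laptop")
    assert len(results) > 0
```

The build job would then run just the tagged tests, for example with pytest -m smoke, so a broken search is caught within minutes of the commit rather than weeks later.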

5. DO Automate Things that will Save your Time

A coworker of mine was testing a feature that needed a completely new account set up each time the test was run. Rather than set one up manually every time, he created Automation that would set up a new account for him, saving valuable time.
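
That kind of setup Automation might look like the following sketch: a pytest fixture that creates a fresh account through the API before each test. The endpoint and payload shape are invented for the example.

```python
# Automating repetitive setup: a fixture that creates a brand-new account via
# the API for every test that asks for one. The endpoint shape is hypothetical.
import uuid

import pytest
import requests

BASE_URL = "https://example.test/api"  # placeholder URL


@pytest.fixture
def new_account():
    payload = {
        "email": f"tester-{uuid.uuid4().hex[:8]}@example.test",
        "plan": "trial",
    }
    response = requests.post(f"{BASE_URL}/accounts", json=payload, timeout=10)
    response.raise_for_status()
    return response.json()  # e.g. {"id": ..., "email": ...}


def test_feature_that_needs_a_fresh_account(new_account):
    # The account already exists by the time the test body runs.
    assert new_account["email"].endswith("@example.test")
```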

6. DO Automate Things that will Allow you to Exercise Lots of Different Options

A test that fills out a form by completing all of the available fields is not completely testing the form. What if there is one missing field? What if there are 2 missing fields? What if one of those fields is required? With Automation, you can exercise many different combinations of form submission in much less time than it would take to do manually.
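
Here is a rough sketch of that idea: generate every way of leaving out one or two fields and submit each combination. The form fields, endpoint, and the rule that email is the required field are all assumptions.

```python
# Exercising many form combinations by generating the subsets of fields to
# omit. Field names, URL, and validation rules are hypothetical.
from itertools import combinations

import pytest
import requests

FORM_URL = "https://example.test/api/contact"  # placeholder endpoint

ALL_FIELDS = {
    "name": "Pat",
    "email": "pat@example.test",
    "phone": "555-0100",
    "message": "Hello",
}

# Every way of leaving out exactly one or two fields.
MISSING_SETS = [set(c) for n in (1, 2) for c in combinations(ALL_FIELDS, n)]


@pytest.mark.parametrize("missing", MISSING_SETS, ids=lambda s: "-".join(sorted(s)))
def test_form_with_missing_fields(missing):
    payload = {k: v for k, v in ALL_FIELDS.items() if k not in missing}
    response = requests.post(FORM_URL, json=payload, timeout=10)

    if "email" in missing:  # assume email is the one required field
        assert response.status_code == 400
    else:
        assert response.status_code == 200
```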

7. DO Automate Things that will Alert you when Something is Wrong

I have several negative tests in my API test suites that verify that a user can’t do something when they don’t have permission to do it. Recently some of those tests failed, alerting me to the fact that someone had changed the permission structure, and now a user was able to view content they shouldn’t.
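
A negative permissions test of that kind can be very short. In this sketch the URL, token, and expected 403 response are placeholders for whatever your application actually uses.

```python
# A negative test: a user without admin rights must not be able to view
# admin-only content. Token and URL are placeholders.
import requests

ADMIN_REPORTS_URL = "https://example.test/api/admin/reports"  # hypothetical
READ_ONLY_TOKEN = "token-for-a-user-without-admin-rights"  # placeholder


def test_read_only_user_cannot_view_admin_reports():
    headers = {"Authorization": f"Bearer {READ_ONLY_TOKEN}"}
    response = requests.get(ADMIN_REPORTS_URL, headers=headers, timeout=10)

    # If someone loosens the permission structure, this fails and alerts us
    # before the change reaches production.
    assert response.status_code == 403
```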

8. DON’T Automate Tests that you Know will be Flaky

If you can’t come up with a way to run an automated test on a feature and have it pass consistently, you may want to run that test manually or find a different way to assert your results. When I was first getting started with Automation, I wanted to test that a feature sent an email and that the email was received. I discovered that email clients are tricky to test in the UI, because there’s no way of knowing how long it will take before the email is delivered. Instead, I verified that the email provider had sent the email, and occasionally did a manual check of the inbox to make sure that the email arrived.
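
The workaround might look something like the sketch below: ask the email provider’s API whether the message was handed off, instead of polling an inbox. Everything here, from the provider endpoint to the response shape, is hypothetical, and in practice a short retry loop may still be needed.

```python
# Verify the provider sent the email rather than waiting on an inbox.
# The provider API, auth, and response format are entirely hypothetical.
import requests

from app_client import trigger_password_reset  # hypothetical app helper

PROVIDER_API = "https://emailprovider.test/api/v1/messages"  # placeholder
API_KEY = "test-key"  # placeholder credential


def test_password_reset_email_was_sent():
    trigger_password_reset("pat@example.test")

    response = requests.get(
        PROVIDER_API,
        params={"to": "pat@example.test", "subject": "Password reset"},
        headers={"Authorization": f"Bearer {API_KEY}"},
        timeout=10,
    )
    response.raise_for_status()
    messages = response.json()["messages"]

    # "sent" here means the provider accepted and dispatched the message.
    assert any(m["status"] == "sent" for m in messages)
```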

9. DON’T Automate Tests for Features that are in the Early Stages and are Expected to go Through Many Changes

It’s great to write unit tests for new code, which, as mentioned above, is usually done by the developer. And automated services tests can be created before there is a UI for a new feature. But if you know that your API endpoints or your UI will be changing quite a bit as the story progresses, you may want to hold off on services or UI Automation until things have settled down a bit. For the moment, manual testing will be your best strategy.

10. DON’T Automate Tests for Features that no one Cares About

Your application probably runs on a wide variety of browsers, and your inclination may be to run your tests on all of them. But it could be that only 1% of users are running your application on a certain browser. If that’s the case, why go through the stress of trying to run your tests on this browser?  Similarly, if there is a feature in your application that will be deprecated soon and only 1% of your users are using it, your time would be better spent automating another feature.

11. DON’T Automate Weird Edge Cases

There will always be bugs in software, but some will be more likely to be seen by users than others. You may be fascinated by the bug that is caused by going to a specific sequence of pages, entering non-UTF-8 characters, and then clicking the Back button 3 times in a row, but since it’s very unlikely that an end user will do this, it’s not worth your time to design an automated test for it.

12. DON’T Automate Bugs you are Sure will Never be Seen Again

I once worked with someone who felt that every bug found needed a corresponding test. This is not always the case. Some bugs are merely cosmetic and are unlikely to appear again. A good example of this is the typo. If a developer accidentally entered text that said “Contcat us” instead of “Contact us,” that was simply an oversight. No developer would ever go into the code and revert to the earlier misspelling, so there’s no need to automate a test that verifies that text.

Summary

Automated tests, when done well, provide fast feedback for developers, alert testers to problems well before they reach production, and free up testers to do more exploratory testing. But when Automation is done poorly, it results in tests that are not trusted and in wasted time for everyone.

Kristin Jackvony
Kristin Jackvony discovered her passion for software testing after working as a music educator for nearly 2 decades. She has been a QA engineer, manager, and lead for the last eleven years and is currently the Principal Engineer for Quality at Paylocity. Her weekly blog, Think Like a Tester, helps software testers focus on the fundamentals of testing.
