Principles for Agile Test Automation

I feel like I’ve spent most of my career learning how to write good automated tests in an Agile environment. When I downloaded JUnit in the year 2000, it didn’t take long before I was hooked – unit tests for everything in sight. That gratifying green bar gives near-instant feedback that everything is going as expected: my code does what I intended, and I can continue developing from a firm foundation.

Later, starting in about 2002, I began writing larger-granularity tests for whole subsystems – functional tests, if you like. The feedback that my code does what I intended, and that the functionality works, has given me the confidence time and again to release updated versions to end-users.

I was not the first to discover that developers design automated functional tests for two main purposes. Initially we design them to help clarify our understanding of what to build; in fact, at that point they’re not really tests at all – we usually call them scenarios, or examples. Later, the main purpose of the tests becomes to detect regression errors, although we continue to use them to document what the system does.

When you’re designing a functional test suite, you’re trying to support both aims, and sometimes you have to make tradeoffs between them. You’re also trying to keep the cost of writing and maintaining the tests as low as possible, and as with most software, it’s the maintenance cost that dominates. Over the years, I’ve begun to think in terms of four principles that help me to design functional test suites that make good tradeoffs and identify when a particular test case is fit for purpose.

Readability

When you look at the test case, you can read it through and understand what the test is for. You can see what the expected behavior is, and what aspects of it are covered by the test. When the test fails, you can quickly see what is broken.

If your test case is not readable, it will not be useful – neither for understanding what the system does, nor for identifying regression errors. When it fails, you will have to dig through other sources outside the test case to find out what is wrong. If you can’t work out what is wrong, you’ll probably rewrite the test to check for something else, or simply delete it.
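
As a small sketch of what I’m after – the ShoppingCart and Item classes here are hypothetical, not from any particular codebase – a readable JUnit test states the expected behavior in its name, and its structure makes the setup, action, and assertion easy to follow:

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class ShoppingCartTest {

        // The test name documents the expected behavior; the body shows
        // exactly which aspect of it this test covers.
        @Test
        public void totalPriceIncludesDeliveryChargeForSmallOrders() {
            // Arrange: an order below the free-delivery threshold
            ShoppingCart cart = new ShoppingCart();
            cart.add(new Item("book", 15.00));

            // Act
            double total = cart.totalPrice();

            // Assert: 15.00 for the book plus a 4.95 delivery charge
            assertEquals(19.95, total, 0.001);
        }
    }

When a test like this fails, the name alone tells you which piece of expected behavior is broken.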

Robustness

When a test fails, it means either there is a regression error (functionality is broken), or the system has changed and the test no longer documents the correct behavior. You need to take action to correct the system or update the test, and this is as it should be. If, however, the test fails for no good reason, you have a problem: a fragile test.

There are many causes of fragile tests. For example, tests that are not isolated from one another, duplication between test cases, and dependencies on random or threaded code. If you run a test by itself and it passes, but fails in a suite together with other tests, then you have an isolation problem. If you have one broken feature and it causes a large number of test failures, you have duplication between test cases. If you have a test that fails in one test run, then passes in the next when nothing changed, you have a flickering test.
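
To make the isolation problem concrete, here is a hypothetical sketch in JUnit (the Counter class is invented for illustration). With shared mutable state like the static field below, each test passes on its own but one fails when the whole class runs, depending on execution order; giving every test a fresh fixture restores isolation:

    import org.junit.Before;
    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class CounterTest {

        // Sharing mutable state between tests like this is a classic
        // isolation problem: each test passes when run alone, but one
        // fails in a full run, depending on execution order.
        private static Counter counter = new Counter();

        // The fix: recreate the fixture before every test.
        @Before
        public void freshCounter() {
            counter = new Counter();
        }

        @Test
        public void startsAtZero() {
            assertEquals(0, counter.value());
        }

        @Test
        public void incrementAddsOne() {
            counter.increment();
            assertEquals(1, counter.value());
        }
    }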

If your tests often fail for no good reason, you will start to ignore them. Quite likely there will be real failures hiding amongst all the false ones, and the danger is you will not see them.

Speed

As an Agile developer, you run your test suite frequently: (a) every time you build the system, (b) before you check in changes, and (c) after check-in, in an automated Continuous Integration environment. I recommend time limits of 2 minutes for (a), 10 minutes for (b), and 60 minutes for (c). This fast feedback gives you the best chance of actually being willing to run the tests, and of finding defects when they’re cheapest to fix – soon after insertion.
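
One practical way to stay within those limits – a sketch using JUnit 4’s category mechanism, with the test class names carried over from the hypothetical examples above – is to tag the slower tests and exclude them from the suite you run on every build:

    import org.junit.experimental.categories.Categories;
    import org.junit.experimental.categories.Categories.ExcludeCategory;
    import org.junit.runner.RunWith;
    import org.junit.runners.Suite.SuiteClasses;

    // Marker interface for tests too slow for the every-build suite.
    interface SlowTests {}

    // Tag a slow test (or a whole test class) with the category:
    //     @Category(SlowTests.class)
    //     @Test
    //     public void importsAMillionRecords() { ... }

    // Suite (a), run on every build: everything except the slow tests.
    @RunWith(Categories.class)
    @ExcludeCategory(SlowTests.class)
    @SuiteClasses({ ShoppingCartTest.class, CounterTest.class })
    public class FastSuite {}

The slower tests still run before check-in and in CI; you just don’t pay for them on every build.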

If your test suite is slow, it will not be used. When you’re feeling stressed, you’ll skip running it, and problem code will enter the system. In the worst case, the test suite will never become green. You’ll fix the one or two problems in a given run and kick off a new test run, but in the meantime you’ll continue developing and making other changes. The diagnose-and-fix loop gets longer, and the tests become less likely to ever all pass at the same time.

Updatability

When the needs of the users change and the system is updated, your tests need to be updated in tandem. It should be straightforward to identify which tests are affected by a given change, and quick to update them all.
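
A design technique that helps here – a sketch with a hypothetical Customer class – is to funnel test data creation through a single builder, so that when the domain model changes, you update one place rather than every test:

    // Every test that needs a Customer goes through this builder, so
    // when the Customer constructor gains a new required field, only
    // the builder (plus any tests that care about the new field) changes.
    public class CustomerBuilder {
        private String name = "Alice";
        private String country = "Sweden";

        public CustomerBuilder withName(String name) {
            this.name = name;
            return this;
        }

        public CustomerBuilder withCountry(String country) {
            this.country = country;
            return this;
        }

        public Customer build() {
            return new Customer(name, country);
        }
    }

    // In a test:
    //     Customer customer = new CustomerBuilder().withCountry("Norway").build();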

If your tests are not easy to update, they will get left behind as the system moves on. Faced with a small change that causes thousands of failures and hours of work to update them all, you may well delete most of the tests.

Following these four principles implies Maintainability

Taken all together, I think how well your tests adhere to these principles determines how maintainable they are – in other words, how much they will cost to keep. That cost needs to be in proportion to the benefits you get: helping you understand what the system does, and protecting you against regressions.

As your test suite grows, it becomes ever more challenging to adhere to all the principles. Readability suffers when there are so many test cases you can’t see the forest for the trees. The more details of your system you cover with tests, the more likely you are to have Robustness problems – tests that fail when those details change. Speed obviously also suffers – the time to run the test suite usually scales linearly with the number of test cases. Updatability doesn’t necessarily get worse as the number of test cases increases, but it will if you don’t adhere to good design principles in your test code, or if you lack tools for bulk updates of test data, for example.

I think the principles are largely the same whether you’re writing skinny little unit tests or fatter functional tests that touch more of the codebase. My experience tells me that it’s a lot easier to be successful with unit tests. As the testing thickness increases, the feedback cycle gets slower, and your mistakes are amplified. That’s why I concentrate on teaching these principles through unit testing exercises. Once you understand what you’re aiming for, you can transfer your skills to functional tests.

How can you use these principles?

I find it useful to remember these principles when designing test cases. I may need to make trade-offs between them, and it helps just to step back from time to time as I develop and assess how I’m doing on each principle. If I’m reviewing someone else’s test cases, I can point to code and say which principles it’s not following, and give concrete advice about how to improve. We can have a discussion, for example, about whether to add more test cases to improve regression protection, and how to do that without reducing overall readability.

I also find these principles useful when I’m trying to diagnose why a test suite is not being useful to a development team, especially if things have got so bad they have stopped maintaining it. I can often identify which principle(s) the team has missed, and advise them on how to refactor the test suite to compensate.

For example, if the problem is lack of speed, you have some options and tradeoffs to weigh:

  • Replace some of the thicker, slower end-to-end tests with lots of skinny, fast unit tests (may reduce regression protection).
  • Invest in hardware and run tests in parallel (costs $).
  • Use a profiler to optimize the tests for speed, just as you would production code (may affect Readability).
  • Use more fakes to replace slow parts of the system, as in the sketch after this list (may reduce regression protection).
  • Identify key test cases for essential functionality and remove the other test cases (sacrifices regression protection to gain Speed).
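
To illustrate the fakes option – a sketch with a hypothetical repository interface and the same invented Customer class as before, not any particular library – the production code depends on an interface, and the tests substitute a fast in-memory implementation for the real database-backed one:

    import java.util.HashMap;
    import java.util.Map;

    // Production code depends on this interface rather than on a
    // concrete database class, so tests can swap in a fast fake.
    interface CustomerRepository {
        void save(Customer customer);
        Customer findByName(String name);
    }

    // An in-memory fake: orders of magnitude faster than a real
    // database, but it no longer exercises the real persistence code –
    // which is why this option may reduce regression protection.
    class InMemoryCustomerRepository implements CustomerRepository {
        private final Map<String, Customer> customers = new HashMap<String, Customer>();

        public void save(Customer customer) {
            customers.put(customer.getName(), customer);
        }

        public Customer findByName(String name) {
            return customers.get(name);
        }
    }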

Strategic Decisions

These principles also help me when I’m discussing automated testing strategy and choosing testing tools. Some tools have better support for updating test cases and test data; some allow very Readable test cases. It’s worth noting that automated tests in Agile are quite different from those in a traditional process, since they are run continually throughout development, not just at the end. I’ve found that many traditional automation tools don’t provide enough Speed and Robustness to support Agile development.

I hope you will find these principles help you to reason about your strategy and tools for functional automated testing, and to design more maintainable, useful test cases.

Emily Bache
Emily Bache is an independent consultant specializing in automated testing and Agile methods. With over 15 years of experience as a software developer in organizations ranging from multinational corporations to small startups, she has learnt to value the technical practices that underpin truly Agile teams. Emily is the author of “The Coding Dojo Handbook: a practical guide to creating a space where good programmers can become great programmers” and speaks regularly at international conferences such as Agile Testing Days and XP2013.
