Making Big Testing a Big Success

There is no one recipe to make big testing a big success. It takes planning and careful execution of the various aspects, like test design, infrastructure and organization – a mix that can be different for each situation in which you may find yourself.

In writing about big testing, the first question that comes up is, “What is ‘big’?” In this article, “big” means, well, a big test.

Now, that does not necessarily mean a big system under test. There are, in fact, a number of factors that can make for a substantial testing effort. They include:

  • Big volume of tests – for example, targeting a large system under test
  • Concurrent variation of code branches
  • Complex functionalities, needing many test situations or values
  • Big variation in target platforms and configurations
  • Big maintenance: rapid and frequent changes to the SUT
  • Big or complex hardware infrastructure needed to support testing

“Big” is also a relative measure. It can mean different things for different organizations. For one company, 1000 test cases could be big, while another company may be looking at 1000 weeks of testing. Testing on ten machines at the same time can seem big, while another setup may involve a data center of ten acres. One definition of “big” could be that the size of the tests is not trivial: you need to think about how to organize and manage them. In other words: to make big testing a big success, it takes more than just making test cases.

In this overview I will describe three factors that I feel are key to big testing:

  • Test design and Automation
  • Infrastructure
  • Organization

Test Design and Automation

When I have to deal with large or complex testing projects, I always look at overall test design first, since that is where the biggest potential gains lie. Organizing your tests well can keep their size manageable and lead to a much better Automation result. I will discuss Automation first, then look at a way to best structure and design your tests.

For almost all big tests, at least the test execution will be automated. Automation is not a must: to be sure, human testing can have distinct advantages over automated tests. This is particularly the case if tests needn't be repeated much (for more on this, see the literature on exploratory testing).

Several techniques have been developed for test Automation. The three most common models are record & playback, scripting and keywords.

Record & playback can be useful to get to know your test tool and UI under test, and to quickly capture key steps in the Automation. Note that this technique should not be used on its own for large scale test development; rather, recordings should be reused in functions or actions.
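As a sketch of that reuse, the snippet below wraps the steps of a raw login recording in a parameterized function (all names are invented for illustration, and a fake driver stands in for a real UI tool so the sketch runs anywhere). Many tests can then call the function, and a UI change is fixed in one place.

```python
# A hedged sketch: rather than replaying a raw recording as-is, its
# steps are wrapped in a parameterized function that many tests reuse.

class FakeDriver:
    """Stand-in for a real UI driver, so the sketch runs anywhere."""
    def __init__(self):
        self.log = []

    def type(self, element, text):
        self.log.append(("type", element, text))

    def click(self, element):
        self.log.append(("click", element))

def login(driver, user, password):
    """Reusable action distilled from a one-off login recording."""
    driver.type("user field", user)
    driver.type("password field", password)
    driver.click("login button")

driver = FakeDriver()
login(driver, "jdoe", "secret")
print(len(driver.log))  # → 3
```

A test that needs a logged-in session now contains one call to `login` instead of three recorded steps.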

Scripting is often based on frameworks, like libraries developed in-house, or public ones like Selenium, a popular open source framework.

Scripting is an effective way to build Automation, but difficult to scale up: the scripts need technical experience to build, and tend to become bulky and inaccessible pieces of software when scaled.

Of the three more common Automation techniques, the keywords technique, in my experience, is the way to go for larger scale testing. For one thing, this system allows more people to get involved. By encapsulating elements of the Automation in keywords, the technique offers a natural way to keep unneeded details hidden from a test design, which in turn keeps large tests more maintainable.

However, keywords alone do not constitute a method, and by themselves are not sufficient to achieve success. One method that builds on keywords is Action Based Testing (ABT). ABT places the focus on how tests can best be organized and designed to accommodate their effective Automation. In ABT, tests are kept as a series of actions in a test module. A test module looks essentially like a spreadsheet. Each action is written as a line in the sheet, starting with an action keyword followed by arguments.
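The mechanics can be sketched in a few lines of Python (the keyword and action names here are illustrative, not ABT's actual API): each row of a test module starts with an action keyword, which is dispatched to a function implementing that action.

```python
# Minimal sketch of keyword-driven execution: rows of a "spreadsheet"
# test module, dispatched to action functions by their first cell.

def enter(field, value):
    print(f"enter {value!r} into {field}")

def check(field, expected):
    actual = expected  # in a real run, this value is read from the UI
    assert actual == expected, f"{field}: {actual!r} != {expected!r}"

ACTIONS = {"enter": enter, "check": check}

def run(module):
    """Dispatch each row's keyword to its action; return what ran."""
    executed = []
    for keyword, *args in module:
        ACTIONS[keyword](*args)
        executed.append(keyword)
    return executed

# A "test module" as spreadsheet-like rows.
test_module = [
    ["enter", "first name", "John"],
    ["enter", "last name", "Doe"],
    ["check", "full name", "John Doe"],
]

print(run(test_module))  # → ['enter', 'enter', 'check']
```

The point of the separation is that testers write and read the rows, while the action functions hide the technical detail.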

A crucial step to get big testing under control with ABT is a good plan for the test modules, which takes an outline form, much like the table of contents in a book. Each test module should have a clear, unambiguous scope that is well-differentiated from the other test modules. Once the test modules have been identified, they can be developed one by one over time. The scope of each test module defines what kinds of actions to use and what kinds of checks to perform.

Of particular importance is that one avoid mixing different kinds of tests within the confines of one test module. A simple example can be taken from a recent project I reviewed. Take a set of tests on a table that presents data within a dialog. Some of the tests may involve manipulating the table, like sorting on columns or cutting and pasting rows. Other tests might address the data in the table: does it have the right values? If you were to place both kinds of tests in the same test module, or test case, your test becomes hard to read. Moreover, it has to change when either the table navigation changes or the value calculations change. In smaller test projects, you may get away with this. But if the tests number in the thousands, or greater, you will soon find that this sort of organization makes it hard to achieve smooth and stable Automation, especially across multiple versions of the application under test.
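In keyword form, the separation might look like the two hypothetical modules below (all keywords and names invented). A change to table navigation touches only the first module; a change to the value calculations touches only the second.

```python
# Hypothetical illustration of module scoping: interaction tests and
# data checks on the same dialog live in separate test modules.

table_handling_module = [
    ["open dialog",     "accounts"],
    ["sort table",      "accounts", "by column", "balance"],
    ["check row order", "accounts", "ascending"],
]

table_data_module = [
    ["open dialog", "accounts"],
    ["check value", "accounts", "row 1", "balance", "1500.00"],
]

# Each module's keywords reflect its scope: one navigates, one verifies data.
print([row[0] for row in table_data_module])  # → ['open dialog', 'check value']
```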

The scope of a test module also defines how much detail you want to see in the tests, and therefore what actions you will use. In a project I looked at a few years ago, a system for bank tellers was tested. In that system the daily teller cycle would begin with obtaining an initial transaction number, which itself took several UI steps to complete. The QA manager initially insisted on having this procedure spelled out and verified step by step as the first part of each and every test module just, in his words, “to make sure”. As a consequence, even a small change in the application necessitated many adjustments to the tests. Only later was the procedure encapsulated in a high-level action called, aptly enough, “get initial transaction number”. With that change, a large share of the maintenance problems evaporated.

The Automation process focuses on automating actions. Some actions may already be predefined in the tool, while higher level actions need to be recorded or developed. In the Automation there are a number of things you can do to speed development and, even more importantly, make the Automation stable. Your developers can help by making UI elements, such as windows, controls or HTML elements, easy to identify. They can do this by assigning values to certain properties, like the “id” attribute in an HTML element, which cannot be seen by a user but can be seen by an Automation tool. Mapping a UI then becomes something that can be performed manually and rapidly, without the need for “spy” tools.
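Such a hand-written UI map can be as simple as the structure below (the window, element, and id names are assumptions for illustration): because developers assigned stable "id" attributes, logical names map straight to locators.

```python
# Sketch of a hand-written UI map: logical names resolved to locators,
# relying on stable "id" attributes supplied by the developers.

UI_MAP = {
    "login window": {
        "user name": ("id", "txtUserName"),
        "password":  ("id", "txtPassword"),
        "log in":    ("id", "btnLogin"),
    },
}

def locate(window, element):
    """Resolve a logical name to a (strategy, value) locator pair."""
    return UI_MAP[window][element]

print(locate("login window", "log in"))  # → ('id', 'btnLogin')
```

Tests and actions then refer to "log in" rather than to the raw id, so a renamed control is fixed in one line of the map.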

Additionally, timing needs your attention. In large test runs on large systems, responses can vary wildly, and you certainly don’t want a timeout to derail a big test run.

Conversely, you don’t want to slow things down due to needlessly long wait times in many places in your test. Make sure you always have something your tooling can detect and wait for. And test your actions with their own test modules, before you run the regular test modules.
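A minimal polling wait illustrates the idea (this is a generic sketch, not any specific tool's API): proceed as soon as a detectable condition holds, with a generous timeout so slow responses don't derail the run, instead of sleeping for a fixed time.

```python
# Poll for a condition instead of sleeping a fixed time: fast when the
# system is fast, tolerant when it is slow, and loud on real failure.

import time

def wait_for(condition, timeout=30.0, interval=0.25):
    """Poll until condition() is truthy; raise on timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = condition()
        if result:
            return result
        time.sleep(interval)
    raise TimeoutError(f"condition not met within {timeout:.1f}s")

# Usage: continue as soon as the element "appears", not after a fixed sleep.
appeared = iter([False, False, True])
print(wait_for(lambda: next(appeared)))  # → True
```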


Infrastructure

Once your tests and Automation are well-designed and stable, your infrastructure basically defines how much you can do in a given amount of time. This is an area where organizations and projects can differ greatly from one another. Some companies may have substantial infrastructure in place; in others, projects may have to be built up from scratch.

As a rule of thumb, avoid using the test developers' own machines to also run the tests, since this may inhibit their productivity. You can either give each tester a second machine, or – probably better – run the tests on dedicated servers.

Note that although you may work with dedicated machines as servers, you may not necessarily need server hardware. Most testing is performed against user interfaces, and requires client machines with client operating systems. In particular, blade server solutions tend to be expensive, and often come with features, like load balancing for web servers, that you do not need for your playback Automation.

One good way to organize execution is virtualization – the use of virtual machines. A virtual machine is a tester’s dream. It can be set up to mimic a variety of configurations much more readily than can a physical machine. You can set up a virtual machine image with a specific operating system, a version of the application under test, and even some base data for that application already defined. Then run one or more instances of that image every time you want to test against that configuration.

One physical machine can often run multiple virtual machines in parallel, expediting your test cycle. However, when planning your infrastructure, keep in mind that automated test runs tend to place a higher load on virtual machines than does human usage. Automated runs perform operations continuously, while humans tend to be “interval-intensive”. Still, it is not uncommon to have five or more virtual machines running comfortably on an inexpensive physical machine. Hardware properties that are commonly recommended for such a physical machine are: gobs of memory, multiple processors and/or multi-core processors, hardware virtualization support, and a second (physical) disk dedicated to the virtual machines it hosts.

Cloud-like infrastructures also come to mind for handling large test executions. If your company has such an infrastructure, getting a substantial part of it allocated to testing is largely a matter of making a business case for it. There are also “public cloud” vendors to consider. These can be of particular help if the need for a large test execution is only for a limited period of time. For continuous use, public clouds tend to be more costly than an in-house virtualization solution.


Organization

Regardless of infrastructure, the success of a big testing project may hinge on the team or teams involved. They have to develop the tests, automate them, manage their execution and follow up on results.

A big testing project has two facets that the teams must be able to handle:

  • A test design and development side, with a focus on creativity, effectiveness, and efficient Automation; and
  • A production side, with a focus on planning, volume and timelines. This demands an industrial “get it done” attitude from the team, which is challenging in its own way.

In a scrum organization, I expect that agile teams can handle most if not all of the big testing needs. It helps to have test execution and system development in the same team, so that any issues that emerge during large tests can be addressed swiftly. However, it is important that the team also include the requisite skills and focus for the managerial aspects of planning and conducting large tests.


In my experience, testing is a profession. There exists a wide variety of techniques that experienced testers can draw from to make tests lean and mean. Sadly, it is not uncommon to see organizations assume that anybody, in particular a developer, can be a tester without much training. From what I've seen, however, this mindset results in tests that are simplistic and uninteresting. This leads to shallow testing, and also to large, cluttered test sets for which stable Automation is hard to achieve. It behooves any organization not to underestimate a profession that has been developing for many years, with many publications and conferences to show for it.

Organizations faced with a need for large-scale testing or large-scale Automation will often look at off-shoring as a way to scale up and down more easily based on project needs. This can be especially helpful if a large effort is needed for a relatively short period of time. However, it needs to be done with care! Big teams may be able to make big tests, but those are not always good tests. Be sure to plan and organize the tests well before starting, and to pay continuous attention to effective design and Automation architecture.

There is no one recipe to make big testing a big success. It takes planning and careful execution of the various aspects, like test design, infrastructure and organization – a mix that can be different for each situation in which you may find yourself.

Hans Buwalda
Hans Buwalda, CTO of LogiGear, is a pioneer of the Action Based and Soap Opera methodologies of testing and automation, and lead developer of TestArchitect, LogiGear’s keyword-based toolset for software test design, automation and management. He is co-author of Integrated Test Design and Automation, and a frequent speaker at test conferences.
