What Is Combinatorial Testing and Why Should Testers Care?

Developers of large data-intensive software often notice an interesting — though not surprising — phenomenon: When usage of an application jumps dramatically, components that have operated for months without trouble suddenly develop previously undetected errors. For example, the application may have been installed on a different OS-hardware-DBMS-networking platform, or newly added customers may have account records with an oddball combination of values that have not occurred before. Some of these rare combinations trigger failures that have escaped previous testing and extensive use. Such failures are known as interaction failures, because they are only exposed when two or more input values interact to cause the program to reach an incorrect result.

Combinatorial testing can help detect problems like this early in the testing life cycle. The key insight underlying this method is that not every parameter contributes to every failure; most failures are triggered by a single parameter value or by interactions among a relatively small number of parameters. To detect interaction failures, software developers often use “pairwise testing”, in which all possible pairs of parameter values are covered by at least one test. Its effectiveness rests on the observation that software failures often involve interactions between parameters. For example, a router may be observed to fail only for a particular protocol when packet volume exceeds a certain rate, a 2-way interaction between protocol type and packet rate. Figure 1 illustrates how such a 2-way interaction may happen in code. Note that the failure will only be triggered when both pressure < 10 and volume > 300 are true.

Figure 1. 2-way interaction failure triggered only when two conditions are true.
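To make the pattern concrete, here is a minimal Python sketch of the kind of 2-way interaction failure Figure 1 describes. The function name is hypothetical, and the pressure and volume thresholds simply mirror the conditions mentioned above; this is an illustration, not the actual Figure 1 code.

def adjust_valve(pressure, volume):
    """Illustrative routine containing a 2-way interaction fault."""
    if pressure < 10:
        # ... some processing for the low-pressure case ...
        if volume > 300:
            # Faulty branch: reached only when BOTH conditions hold,
            # so tests that vary one input at a time will never expose it.
            raise RuntimeError("BOOM: faulty code reached")
        return "low-pressure handling"
    return "normal handling"

A pairwise test suite necessarily includes some test that pairs a pressure below 10 with a volume above 300, so the faulty branch is reached; varying one input at a time gives no such guarantee.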

But what if some failure is triggered only by a very unusual combination of 3, 4, or more sensor values? It is very unlikely that pairwise tests would detect this unusual case; we would need to test 3-way and 4-way combinations of values. But is testing all 4-way combinations enough to detect all errors? What degree of interaction occurs in real failures in real systems? Surprisingly, this question had not been studied when NIST began investigating interaction failures in 1999. Results showed that across a variety of domains, all failures could be triggered by a maximum of 4-way to 6-way interactions. As shown in Figure 2, the detection rate (y axis) increased rapidly with interaction strength (the interaction level t in t-way combinations is often referred to as strength). Studies by other researchers have been consistent with these results.

Figure 2. Error detection rates for interaction strengths 1 to 6

These results are interesting because they suggest that, while pairwise testing is not sufficient, the degree of interaction involved in failures is relatively low. Testing all 4-way to 6-way combinations may therefore provide reasonably high assurance. As with most issues in software, however, the situation is not that simple. Most parameters are continuous variables which have possible values in a very large range (±2^32 or more). These values must be discretized to a few distinct values. In addition, we need to determine the correct result that should be expected from the system under test for each set of test inputs. But these challenges are common to all types of software testing, and a variety of good techniques have been developed for dealing with them.
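As a brief illustration of the discretization step, a tester might reduce a continuous input to a handful of representative values chosen around interesting boundaries. The Python sketch below is purely hypothetical; the parameter name and boundary values are invented for illustration.

# Hypothetical discretization of a continuous "pressure" input into a few
# representative test values clustered around assumed boundaries.
PRESSURE_TEST_VALUES = [-1, 0, 9, 10, 11, 500]

def pressure_class(pressure):
    """Map a raw reading to the equivalence class used in test design."""
    if pressure < 0:
        return "invalid"
    if pressure < 10:
        return "low"
    return "normal"

Each equivalence class then contributes one or a few discrete values to the combinatorial model, keeping the number of value combinations manageable.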

Figure 3. 3-way covering array

A test suite to cover all 2-way, 3-way, etc. combinations is known as a covering array. An example is given in Figure 3, which shows a 3-way covering array for 10 variables with two values each. The interesting property of this array is that any three columns contain all eight possible values for three binary variables. For example, taking columns F, G, and H, we can see that all eight possible 3-way combinations (000, 001, 010, 011, 100, 101, 110, 111) occur somewhere in the three columns together. In fact, any combination of three columns chosen in any order will also contain all eight possible values. Collectively, therefore, this set of tests will exercise all 3-way combinations of input values in only 13 tests, as compared with 1,024 for exhaustive coverage.
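The covering property itself is easy to check mechanically. The Python sketch below is a generic checker, not any particular tool's API: it reports every t-way combination of values that a candidate suite fails to cover.

from itertools import combinations, product

def missing_t_way(tests, num_params, values, t):
    """Return the t-way (column set, value tuple) combinations that no test covers."""
    needed = {
        (cols, combo)
        for cols in combinations(range(num_params), t)
        for combo in product(values, repeat=t)
    }
    for test in tests:
        for cols in combinations(range(num_params), t):
            needed.discard((cols, tuple(test[c] for c in cols)))
    return needed

# The exhaustive suite of 1,024 tests trivially covers all 3-way combinations...
exhaustive = list(product([0, 1], repeat=10))
assert not missing_t_way(exhaustive, 10, [0, 1], 3)

# ...but an arbitrary 13-test subset does not, which is why covering arrays
# such as the one in Figure 3 must be constructed deliberately.
print(len(missing_t_way(exhaustive[:13], 10, [0, 1], 3)), "3-way combinations missed")

Running the same check against the 13 tests of Figure 3 would report nothing missing.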

How Can Combinatorial Methods Be Used in Testing?

There are basically two approaches to combinatorial testing: using combinations of configuration parameter values, or combinations of input parameter values. In the first case, we select combinations of values of configurable parameters. For example, telecommunications software may be configured to work with different types of call (local, long distance, international), billing (caller, phone card, 800), access (ISDN, VOIP, PBX), and server for billing (Windows Server, Linux/MySQL, Oracle).
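To make the configuration approach concrete, the Python sketch below builds a small pairwise test set for the telecommunications example using a simple greedy heuristic (repeatedly keep the configuration that covers the most not-yet-covered value pairs). The parameter names come from the example above; the algorithm is a generic illustration, not the method of any particular covering-array tool.

from itertools import combinations, product

parameters = {
    "call_type": ["local", "long_distance", "international"],
    "billing":   ["caller", "phone_card", "800"],
    "access":    ["ISDN", "VOIP", "PBX"],
    "server":    ["Windows_Server", "Linux_MySQL", "Oracle"],
}

def pairwise_tests(params):
    """Greedy 2-way test generation over a dictionary of parameter values."""
    names = list(params)
    uncovered = {                      # every value pair that must be covered
        ((n1, v1), (n2, v2))
        for n1, n2 in combinations(names, 2)
        for v1 in params[n1]
        for v2 in params[n2]
    }
    candidates = [dict(zip(names, vals)) for vals in product(*params.values())]
    tests = []
    while uncovered:
        def gain(cfg):                 # pairs this configuration would newly cover
            return sum(((n1, cfg[n1]), (n2, cfg[n2])) in uncovered
                       for n1, n2 in combinations(names, 2))
        best = max(candidates, key=gain)
        for n1, n2 in combinations(names, 2):
            uncovered.discard(((n1, best[n1]), (n2, best[n2])))
        tests.append(best)
    return tests

suite = pairwise_tests(parameters)
print(len(suite), "pairwise configurations instead of",
      len(list(product(*parameters.values()))), "exhaustive ones")

For four parameters with three values each, exhaustive testing needs 3^4 = 81 configurations, while a pairwise set needs only about 9 to 12.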

In the second approach, we select combinations of input data values, which then become part of complete test cases, creating a test suite for the application. For example, a word processing application may allow the user to select 10 ways to modify some highlighted text: subscript, superscript, underline, bold, italic, strikethrough, emboss, shadow, small caps, or all caps. Thorough testing requires that the font-processing function work correctly for all valid combinations of these input settings. But with 10 binary inputs, there are 2^10 = 1,024 possible combinations. The empirical analysis reported above, however, shows that testing all 3-way combinations may detect 90% or more of bugs. For a word processing application, testing that detects better than 90% of bugs may be a cost-effective choice. The covering array shown in Figure 3 could be used for this testing.
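To turn rows of such an array into executable tests, each column is mapped to one of the ten text attributes and each row becomes a set of inputs. The Python snippet below shows the idea; the row is made up for illustration, not copied from Figure 3.

# Hypothetical mapping from one covering-array row to word-processor settings.
options = ["subscript", "superscript", "underline", "bold", "italic",
           "strikethrough", "emboss", "shadow", "small_caps", "all_caps"]
row = (0, 1, 1, 0, 0, 1, 0, 1, 1, 0)      # one illustrative row of a binary array
test_case = {name: bool(bit) for name, bit in zip(options, row)}
print(test_case)                           # e.g. {'subscript': False, 'superscript': True, ...}

Repeating this for each of the 13 rows yields 13 concrete test cases that together exercise every 3-way combination of the formatting options.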

Conclusions

Using combinatorial methods for either configuration or input parameter testing can help make testing more effective at a lower overall cost. Although these methods do not apply to all applications, they can be a valuable addition to the tester’s toolbox.

Certain commercial products are identified in this document, but such identification does not imply recommendation by the US National Institute of Standards and Technology, nor does it imply that the products identified are necessarily the best available for the purpose.


(This article is condensed from Chapter 2 of Practical Combinatorial Testing, freely downloadable here)

Rick Kuhn
Rick Kuhn is a computer scientist at the Computer Security Division of the US National Institute of Standards and Technology (NIST). Raghu Kacker is a mathematical statistician at the Mathematical and Computational Sciences Division of NIST. Yu Lei is an associate professor at the Department of Computer Science and Engineering at the University of Texas at Arlington.
