TestStorming™: Build a Collaborative Approach to Software Test Design in 11 Easy Steps


There are many ways to approach test design. These approaches range from simple checklists to very precise algorithms in which test conditions are combined to maximize testing efficiency.

There are situations, such as in testing mobile applications, complex systems and cyber security, where tests need to be creative, cover a lot of functionality, and go beyond what may be described in a requirements document, use case or user story.

Over the last thirty years or more, a variety of test design techniques have been described in books and training courses. These techniques include boundary-value analysis, decision tables, requirements-based testing and so on. Each of these approaches has upsides and downsides, so the test analyst must fully understand the limitations and requirements of the techniques used in a particular situation.

In some cases, test design techniques can be combined to achieve a higher level of test coverage, as well as gaining different angles or perspectives on the software to be tested.

One thing I have learned in my many years of experience as a test designer is that test design is a very nuanced activity. Textbook techniques are fine and needed, but a small change in how a technique is applied can have a huge impact on the effectiveness of the tests. This is one reason why training in how to apply test design techniques is so important. However, training is just the starting point for good test design.

Another thing I have observed is that many times, test design is performed by one person, working from documents or other items. This means the test designer is often working in isolation.

When test design is conducted in isolation, it is easy to miss important perspectives and conditions that should be tested. A collaborative approach to test design has many benefits:

  • Shared problem solving ability
  • Increased knowledge of the application to be tested
  • Increased knowledge and experience in test design
  • Higher levels of ownership in the test design activity
  • Greater creativity in ways to create tests
  • More objectivity in prioritizing tests
  • Better communication about the project, the tests and the problems to be solved

In this article, I describe a technique I have found simple and effective in designing great tests quickly. This technique combines the world of test design with that of brainstorming, so I call this technique “TestStorming™”.

The process

TestStorming™ doesn’t have to be complicated or regimented; however, there are some key steps that help in getting off to a good start.

Step 1–Decide on How to Capture Input

This process is so easy it can be performed using just a whiteboard. However, there are some great tools, such as mind-mapping tools (see the article in this issue), that can add organization and the ability to distribute the information gathered in the session.

One such tool is FreeMind, an open source mind mapping tool that works well in both Mac and Windows environments.

Another free tool is XMind.

Step 2–Define the Scope of the Session

This includes both the time to be spent in the session and the number of features to be addressed.

Step 3–Define the Focus of the Session

Having a particular focus prevents the team from digressing into areas that may have limited value or importance in the test. For example, the focus of a TestStorming™ session might be the security aspects of a mobile device. In fact, the focus could be to design tests for the secure ordering process using a mobile device.

Step 4–Assign a Leader

Without a leader, the session can drift into non-productive discussions. The leader may also need to “prime the pump” by offering some initial suggestions. Sometimes a session stalls; this requires the leader to inject remarks or prompts to spark more ideas.

The leader should be someone with knowledge of both the subject matter and testing. The leader should also know how to facilitate dynamic group discussions, mediate disputes and keep a session on track.


Step 5–Invite the Participants

Like many other activities, such as reviews and retrospectives, the success of a TestStorming™ session depends on the quality of the people in the room. You want people who are:

  • Creative thinkers
  • Critical thinkers
  • Knowledgeable about the context of how the application will be used
  • Knowledgeable about organizational goals for the application
  • Understanding of different types of users
  • Knowledgeable about test design (you want people who can think of strong tests)
  • Courteous to others in the session
  • Able to contribute ideas (otherwise, they are not really needed in the session)

Another way to identify participants is by role, such as:

  • Testers
  • Developers
  • Users

Step 6–Conduct the Session

In the session, several things are discussed at the outset, including:

  • The basis for test design, such as requirements, use cases, user stories, user experience, and the application itself
  • Known risks
  • Known issues
  • Past defect trends
  • The nature of the application and specific features to be tested
  • The context of how the application is used
  • The objective of the session

Once the background is understood, participants contribute their ideas for test design. This can be done in a variety of ways:

  • Answering context-free questions, such as:
      • What are the most important things to be tested?
      • What are the least important things to be tested?
      • Where is the risk in the features to be tested?
      • Which unusual events could occur?
      • Which events would be most commonly performed by the average user?
  • Each person writing their “top 5” tests on index cards, which the leader will use as input for the list of conditions to test.
  • Going around the room in a round-robin format, making suggestions about conditions to test. This allows people to build on other people’s ideas.

One of the most basic rules of brainstorming is to not restrict the ideas that are suggested. This rule is also adhered to in TestStorming™. The goal is to get as many ideas as possible on the table, board, or tool. Then, they can be refined.

Step 7 – Filter, Refine and Combine Tests

This is where the magic happens. After the team has contributed their ideas and it is apparent that the flow of input is slowing down, the leader asks the team to take a step back and look at the ideas put forth.

From the test ideas suggested, the team organizes them into categories such as:

  • Functions
  • Risks
  • User personas

During the categorization process, some tests may be eliminated, others may be combined and enhanced.
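As a rough sketch, the categorization step can be captured in a few lines of code. The idea texts below are invented purely for illustration; the three category names come from the list above.

```python
# Hypothetical sketch of Step 7: grouping raw TestStorming ideas into the
# categories named in the article. The idea texts are invented examples.
from collections import defaultdict

# Raw ideas as captured in the session, tagged by the team during review.
raw_ideas = [
    ("place an order as a new customer", "Functions"),
    ("order while the network connection drops", "Risks"),
    ("first-time mobile shopper checks out", "User personas"),
    ("place an order as an existing customer", "Functions"),
]

def categorize(ideas):
    """Group (idea, category) pairs into a category -> list-of-ideas mapping."""
    grouped = defaultdict(list)
    for idea, category in ideas:
        grouped[category].append(idea)
    return dict(grouped)

grouped = categorize(raw_ideas)
print(grouped["Functions"])
# → ['place an order as a new customer', 'place an order as an existing customer']
```

Once the ideas are grouped this way, duplicates within a category are easy to spot and eliminate or combine.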

One of my favorite test design methods is what I call “toggling” conditions. For example, if someone suggests testing a new customer placing an order, create a test condition for an existing customer placing an order. Toggling allows for ideas to grow exponentially by expanding on existing test conditions.
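A minimal sketch of toggling, assuming conditions are written as short phrases and a table of complementary word pairs (the pairs below are invented for illustration):

```python
# Hypothetical sketch of "toggling" a test condition: flip each togglable
# word to generate complementary conditions. The toggle pairs are assumptions.
TOGGLES = {
    "new": "existing",
    "existing": "new",
    "valid": "invalid",
    "invalid": "valid",
}

def toggle_condition(condition):
    """Return the complementary conditions produced by flipping one togglable word at a time."""
    words = condition.split()
    toggled = []
    for i, word in enumerate(words):
        if word in TOGGLES:
            flipped = words[:i] + [TOGGLES[word]] + words[i + 1:]
            toggled.append(" ".join(flipped))
    return toggled

print(toggle_condition("new customer places valid order"))
# → ['existing customer places valid order', 'new customer places invalid order']
```

Each suggested condition can thus spawn several complementary ones, which is why toggling grows the idea pool so quickly.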

Step 8 – Conclude the Session

At the end of the session, the leader should recap the tests suggested. For the sake of time, a summary may be best instead of reading each and every condition. If a whiteboard or flipchart is used to capture ideas, take pictures of it to e-mail to the team and others. If a tool is used, then the chart can be saved and distributed in either an editable format or an image format.

Step 9 – Refine the Tests

The purpose of the TestStorming™ session is to gather ideas for testing and to get those ideas to the point of use as strong, effective tests.

The next steps occur after the session when test analysts take the great ideas and use them for robust test design.

In some cases, the output from a TestStorming™ session might be a good set of test conditions. In other cases, the output may be good ideas of things to test, much like you would have in a checklist or defect taxonomy.

Regardless of the detail obtained from the session, the ideas and/or conditions will then need to be formed into executable test cases with expected results. In some situations, you may not know enough about the application to define expected results. In that case, you will need to treat the TestStorming™ output as a place to start exploring the application.
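To make this concrete, here is a hypothetical sketch of one brainstormed condition refined into an executable test case with an expected result. The order-total rule and function name are invented for illustration only, not from the article:

```python
# Toy function under test, invented to illustrate the refinement step.
def order_total(quantity, unit_price):
    """Total cost of an order line."""
    return quantity * unit_price

def test_existing_customer_places_order():
    # Brainstormed condition: "existing customer places an order"
    # Refined into a concrete test case with a defined expected result.
    total = order_total(quantity=3, unit_price=10.0)
    assert total == 30.0

test_existing_customer_places_order()
print("test passed")
# → test passed
```

The key difference from the raw session output is the expected result: the brainstormed phrase says *what* to exercise, while the refined test also says what outcome counts as a pass.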

Step 10 – Perform the Tests

Since the main objective is to collaborate on tests, the way we know whether they are effective tests is to actually perform them. The main determinant is not whether the tests pass or fail, but rather, do they exercise the application in a way that provides coverage, measures risk, and goes beyond simple functional tests?

Step 11 – Update the Tests as Needed

After performing the tests, you will have a feel for whether or not to include the tests in future cycles. Perhaps you may decide to build upon certain tests. You may also see where some of the tests need to be modified to be more effective in finding failures.


There is power in collaboration, and test design is an activity that benefits greatly from group interaction. TestStorming™ is where test design meets brainstorming to define creative and effective tests.

To be successful, you need a good leader, good participants and a way to capture the input from multiple people.

Thankfully, there are some great free tools available. The rest is up to you to put your team’s creativity to work!
Randy’s original blog post can be found on his company’s website.

Randy Rice

Randall (Randy) W. Rice is a thought-leading author, speaker and consultant in the field of software testing and software quality. Randy has over 35 years of experience building and testing mission-critical projects in a variety of environments. He has worked in the roles of software developer, system designer, project manager, QA manager, test manager, management consultant and trainer. Randy is the founder of Rice Consulting Services, where he is Vice-president of Research and Development and principal consultant.
You can follow Randy on Twitter @rricetester.


