How to Turn Your Software Testing Team into a High-Performance Organization

This article was adapted from “How to Turn Your Testing Team Into a High-Performance Organization,” a presentation to be given by Michael Hackett, LogiGear Vice President, Business Strategy and Operations, at the Software Test & Performance Conference 2006 at the Hyatt Regency in Cambridge, Massachusetts (November 7–9, 2006).

Introduction

Testing is often looked upon as an unmanageable, unpredictable, and unorganized practice with little structure. It is common to hear questions and complaints from development such as:

  • What is testing doing?
  • Testing takes too long
  • Testers have negative attitudes

Testers know that these complaints and questions are often unfair and untrue. Setting the development/testing debate aside, there is always room for improvement. The first step in improving strategy and turning a test team into a higher-performance team is getting a grasp on where you are now! You want to know:

  • What testing is effective?
  • Are we testing the right things at the right time?
  • Do we need a staffing upgrade?
  • What training does our team need?
  • How does the product team value the test effort?

In this article we provide a framework for assessing your team: how to plan for an assessment, how to execute it and judge your current performance, what to do with the information, and how to chart an improvement plan toward higher performance.

The Test Process Assessment

The goal of a test process assessment is to get a clear picture of what is going on in testing: the good things, the problems, and possible paths to improvement. Fundamentally, a test assessment is a data-gathering process; to make effective decisions, we need data about the current test process. If done properly, the assessment will likely cross many organizational and management boundaries.

It is important to note when embarking upon such an assessment that the effort is much larger than the test team alone. Issues will arise over who owns quality and over what the goal of testing is. It is also important to note that a possible result of the assessment is that work may actually increase. There may be:

  • More demands for documentation
  • More metrics
  • More responsibility for communication and visibility into testing

For such an assessment process to succeed, you need:

  • Executive sponsorship
  • A measurement program
  • Tools to support change
  • An acceptance of some level of risk
  • Avoidance of blaming testing for project-wide failures
  • Commitment to the goal of testing
  • An understanding of testing or quality assurance across the product team
  • Responsibility for quality

Components of a Test Strategy – SP3

A test strategy has three components that need to work together to produce an effective test effort. We have developed a model called SP3, based on a framework developed by Mitchell Levy of the Value Framework Institute. The strategy (S) consists of three P components (a short illustrative sketch follows the list):

  1. People (P1) – everyone on your team
  2. Process (P2) – the software development and test process
  3. Practice (P3) – the methods and tools your team employs to accomplish the testing task
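
As a small illustration of how the model can be put to work, the sketch below (Python, with hypothetical names) tags each assessment finding with its SP3 component so that findings roll up cleanly in the later phases. It illustrates the idea and is not part of the SP3 framework itself:

    from dataclasses import dataclass
    from enum import Enum

    class Component(Enum):
        """The three P's of the SP3 model."""
        PEOPLE = "P1"    # everyone on your team
        PROCESS = "P2"   # the software development and test process
        PRACTICE = "P3"  # the methods and tools the team employs

    @dataclass
    class Finding:
        """One assessment observation, tagged by SP3 component."""
        component: Component
        summary: str
        severity: int  # e.g., 1 (minor) through 5 (critical)

    f = Finding(Component.PROCESS, "No change control for late requirements", 4)
    print(f.component.name, "-", f.summary)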

The Six Phases of a Test Process Assessment

Phase 1 – Pre-Assessment Planning: The goals for this phase are to set expectations, plan the project, set a timeline, and obtain executive sponsorship. The actions in phase 1 include meeting with the management of the various groups, laying out expectations for the results of the process, describing the plan, and establishing a timeline. The intended result is agreement on expectations, buy-in on the assessment process, and a follow-up commitment to improvement. The phase 1 deliverable is a schedule and a project plan (a rough scheduling sketch follows the list below).

In phase 1 it is important to:

  • Get executive buy-in
  • Make a schedule and stick to it
  • Give a presentation on what you are doing, why you are doing it, and what you hope to get out of it
  • Make a statement of goals or outline of work as a commitment
  • Make a scope document a pre-approval/budget deliverable

It is important to note up front that the assessment is only the beginning of the process.
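
As one rough way to draft the schedule deliverable, the sketch below lays the six assessment phases (described in the sections that follow) end to end. The durations and kickoff date are hypothetical placeholders to be adjusted for your organization:

    from datetime import date, timedelta

    # Hypothetical phase durations in calendar days.
    phases = [
        ("Phase 1: Pre-assessment planning",    7),
        ("Phase 2: Information gathering",     14),
        ("Phase 3: Assessment",                14),
        ("Phase 4: Post-assessment",            7),
        ("Phase 5: Presentation of findings",   2),
        ("Phase 6: Implementation of roadmap", 60),
    ]

    start = date(2006, 11, 13)  # hypothetical kickoff date
    for name, days in phases:
        end = start + timedelta(days=days)
        print(f"{name}: {start} to {end}")
        start = end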

Phase 2 – Information Gathering: The goal of phase 2 is to develop the interview questions and surveys that become the backbone of your findings. Actions in phase 2 include gathering documentation, developing interview questions, and developing a test team survey. The result of this phase is that you will be ready to begin your assessment using the documentation, interview questions, and test team survey. The deliverables are the complete development process documentation, the interview questions, and the tester survey.

Examples of the documentation to be collected include SDLC documentation, engineering requirements documentation, and testing documents (test plan templates and examples, test case templates and examples, status reports, and test summary reports). Interview questions need to cover a wide range of issues, including (but not limited to) the development process, test process, requirements, change control, automation, tool use, developer unit testing, opinions about the test team from other groups, expectations of the test effort, political problems, and communication issues.
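
A minimal sketch of how the tester survey might be structured and collated, assuming Likert-scale answers (1 = strongly disagree, 5 = strongly agree). The question texts and category labels are hypothetical examples, not a prescribed instrument:

    questions = [
        ("process",  "Requirements are stable enough to test against."),
        ("process",  "Change control is followed for late requirements."),
        ("practice", "Automation adequately covers the regression suite."),
        ("people",   "I have the training I need for the tools we use."),
    ]

    def collate(responses):
        """responses: one dict per tester, mapping question index to score."""
        by_category = {}
        for response in responses:
            for idx, score in response.items():
                category = questions[idx][0]
                by_category.setdefault(category, []).append(score)
        return {cat: sum(s) / len(s) for cat, s in by_category.items()}

    # Two hypothetical respondents
    print(collate([{0: 2, 1: 1, 2: 3, 3: 4}, {0: 3, 1: 2, 2: 2, 3: 5}]))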

Phase 3 – Assessment: The goal of phase 3 is to conduct the interviews and develop preliminary findings. Actions include gathering and reviewing documentation, conducting interviews, and sending out and collecting the surveys. As a result of this phase there will be a significant amount of material and information to review.

Phase 4 – Post-Assessment: The goal of phase 4 is to synthesize all of the information into a list of findings. Actions include reviewing, collating, thinking, forming opinions, and making hypotheses. The result of this phase is a list of findings drawn from all of the gathered information: the reviewed documentation, the interviews, and the survey. The phase 4 deliverables are the list of findings, the collated survey answers, the collated interview responses, a staff assessment, and a test group maturity ranking.

The findings can be categorized into:

  • People
    • Technical skills
    • Interpersonal skills
  • Process
    • Documentation
    • Test process
    • SDLC
  • Practice
    • Strategy
    • Automation
    • Environment
    • Tools

More subcategories may also be developed to suit your needs.
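
One simple way to collate the phase 4 findings is to tally them against that taxonomy; the sketch below uses hypothetical finding data to show the idea:

    from collections import Counter

    # The categories and subcategories above, as a lookup table.
    TAXONOMY = {
        "people":   ["technical skills", "interpersonal skills"],
        "process":  ["documentation", "test process", "sdlc"],
        "practice": ["strategy", "automation", "environment", "tools"],
    }

    def tally(findings):
        """findings: (category, subcategory) pairs collected in phase 4."""
        counts = Counter()
        for category, sub in findings:
            assert sub in TAXONOMY[category], f"unknown subcategory: {sub}"
            counts[(category, sub)] += 1
        return counts

    findings = [("process", "documentation"), ("practice", "automation"),
                ("process", "documentation"), ("people", "technical skills")]
    for (category, sub), n in tally(findings).most_common():
        print(f"{category}/{sub}: {n} finding(s)")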

Phase 5 – Presentation of Findings: The goal of phase 5 is to present the preliminary findings to the executive sponsor, the project sponsor, and the team, and to obtain agreement on the highest-priority improvement areas. It is important in this phase to be prepared for interpretations of the findings that differ sharply from your own. The phase 5 deliverable is an improvement roadmap.
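
One simple way to arrive at the highest-priority improvement areas is to score each candidate by impact and effort during the phase 5 meeting and rank by impact per unit of effort. The candidates and scores below are hypothetical:

    # Impact and effort on a 1-5 scale, agreed with the sponsors.
    candidates = [
        ("Introduce a test plan template",        4, 1),
        ("Stand up a dedicated test environment", 5, 4),
        ("Automate the smoke-test suite",         4, 3),
        ("Train testers on the defect tracker",   2, 1),
    ]

    roadmap = sorted(candidates, key=lambda c: c[1] / c[2], reverse=True)
    for area, impact, effort in roadmap:
        print(f"{impact / effort:.2f}  {area}")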

Phase 6 – Implementation of the Roadmap: The goal of phase 6 is to establish goals with timelines, milestones, and subtasks for the improvements agreed upon. The action of phase 6 is to develop a schedule for implementing the improvement plan. It is helpful at this point to get some aspect of the project implemented immediately so people can see tangible results right away, even if it is the smallest or easiest improvement task. The phase 6 deliverable is the implementation of the roadmap items according to the developed schedule.
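
A minimal sketch of tracking the roadmap as dated milestones, with the quick win completed first for visible progress. Task names and dates are hypothetical:

    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class Milestone:
        task: str
        due: date
        done: bool = False

    roadmap = [
        Milestone("Publish test plan template (quick win)", date(2006, 12, 1)),
        Milestone("Start weekly test status report",        date(2006, 12, 8)),
        Milestone("Pilot smoke-suite automation",           date(2007, 1, 15)),
    ]

    roadmap[0].done = True  # complete the smallest item first for visible progress
    for m in sorted(roadmap, key=lambda m: m.due):
        print("[x]" if m.done else "[ ]", m.due, m.task)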

Conclusion

A test strategy is a holistic plan that starts with a clear understanding of the core objective of testing; from that understanding we derive a structure for testing by selecting from the many testing styles and approaches available to meet the objective. Performing an assessment provides that clear understanding of where you are and of the core objective of testing. Implementing the resulting improvement roadmap can substantially improve the performance of your software testing organization and help solidify your test strategy.

LogiGear Software Test & Performance Conference 2006 Presentations

Presentations to be delivered by LogiGear at the Software Test & Performance Conference 2006 include:

  • Wednesday, Nov. 8, 8:30 am to 10:00 am – “Effectively Training Your Offshore Test Team” by Michael Hackett
  • Wednesday, Nov. 8, 1:15 pm to 2:30 pm – “How to Optimize Your Web Testing Strategy” by Hung Q. Nguyen
  • Wednesday, Nov. 8, 3:00 pm to 4:15 pm – “Agile Test Development” by Hans Buwalda
  • Thursday, Nov. 9, 8:30 am to 10:00 am – “Strategies and Tactics for Global Test Automation, Part 1” by Hung Q. Nguyen
  • Thursday, Nov. 9, 10:30 am to 12:00 pm – “Strategies and Tactics for Global Test Automation, Part 2” by Hung Q. Nguyen
  • Thursday, Nov. 9, 2:00 pm to 3:15 pm – “The 5% Challenges of Test Automation” by Hans Buwalda

To register or for more information on STP CON, see: http://www.stpcon.com/

Michael Hackett

Michael is a co-founder of LogiGear Corporation and has over two decades of experience in software engineering in banking, securities, healthcare, and consumer electronics. Michael is a Certified Scrum Master and has co-authored two books on software testing: Testing Applications on the Web: Test Planning for Mobile and Internet-Based Systems (Wiley, 2nd ed. 2003) and Global Software Test Automation (Happy About Publishing, 2006). He is a founding member of the Board of Advisors at the University of California Berkeley Extension and has taught for the Certificate in Software Quality Engineering and Management at the University of California Santa Cruz Extension. He is a member of IEEE, and his training courses have brought Silicon Valley testing expertise to over 16 countries. Michael holds a Bachelor of Science in Engineering from Carnegie Mellon University.
