How to Turn Your Software Testing Team into a High-Performance Organization

This article was adapted from a presentation titled “How to Turn Your Testing Team Into a High-Performance Organization” to be presented by Michael Hackett, LogiGear Vice President, Business Strategy and Operations, at the Software Test & Performance Conference 2006 at the Hyatt Regency Cambridge, Massachusetts (November 7 – 9, 2006).

Introduction

Testing is often viewed as an unmanageable, unpredictable, unorganized practice with little structure. It is common to hear questions or complaints from development such as:

  • What is testing doing?
  • Testing takes too long
  • Testers have negative attitudes

Testers know that these complaints and questions are often unfair and untrue. Setting aside the development/testing debate, there is always room for improvement. The first step in improving strategy and turning a test team into a higher-performance test team is getting a grasp on where you are now. You want to know:

  • What testing is effective?
  • Are we testing the right things at the right time?
  • Do we need a staffing upgrade?
  • What training does our team need?
  • How does the product team value the test effort?

In this article we provide a framework for assessing your team, including: how to plan for an assessment, how to execute the assessment and judge your current performance, what to do with the information, and how to chart an improvement plan toward higher performance.

The Test Process Assessment

The goal of doing a test process assessment is to get a clear picture of what is going on in testing: the good things, the problems, and possible paths to improvement. Fundamentally, a test assessment is a data gathering process. To make effective decisions we need data about the current test process. If done properly, the assessment will probably cross many organizational and management boundaries.

It is important to note when embarking upon such an assessment that this effort is much larger than the test team alone. Issues will arise over who owns quality and over what the goal of testing is. It is also important to note that one possible result of the assessment is that work may actually increase. There may be:

  • More demands for documentation
  • More metrics
  • More responsibility for communication and visibility into testing

For such an assessment process to succeed requires:

  • Executive sponsorship
  • A measurement program
  • Tools to support change
  • An acceptance of some level of risk
  • Avoidance of blaming testing for project-wide failures
  • Commitment about the goal of testing
  • An understanding of testing or quality assurance across the product team
  • Responsibility for quality

Components of a Test Strategy – SP3

A test strategy has three components that need to work together to produce an effective test effort. We have developed a model called SP3, based on a framework developed by Mitchell Levy of the Value Framework Institute. The strategy (S) comprises three components:

  1. People (P1) – everyone on your team
  2. Process (P2) – the software development and test process
  3. Practice (P3) – the methods and tools your team employs to accomplish the testing task

The assessment itself proceeds in six phases, described below.

Phase 1-Pre-Assessment Planning: The goals for this phase are to set expectations, plan the project, set a timeline, and obtain executive sponsorship. The actions in phase 1 include meeting with the management of various groups, laying out expectations for the results of the process, describing the plan, and establishing a timeline. The intended result is agreement on expectations, buy-in on the assessment process, and a follow-up commitment to improvement. The phase 1 deliverable is a schedule and a project plan.

In phase 1 it is important to:

  • Get executive buy-in
  • Make a schedule and stick to it
  • Give a presentation of what you are doing, why and what you hope to get out of it
  • Make a statement of goals or outline of work as a commitment
  • Make a scope document a pre-approval/budget deliverable

It is important to note up front that assessment is only the beginning of the process.

Phase 2-Information Gathering: The goal of phase 2 is to develop interview questions and surveys which become the backbone of your findings. Actions in phase 2 include gathering documentation, developing interview questions, and developing a test team survey. The result of this phase is that you will be ready to begin your assessment using the documentation, interview questions, and test team survey. The deliverables include complete development process documentation, interview questions, and the tester survey.

Examples of the documentation to be collected include: SDLC documentation, engineering requirements documentation, testing documents (test plan templates and examples, test case templates and examples, status reports, and test summary reports). Interview questions need to cover a wide range of issues, including (but not limited to): the development process, test process, requirements, change control, automation, tool use, developer unit testing, opinions about the test team from other groups, expectation of the test effort, political problems, communication issues, and more.

Phase 3-Assessment: The goal of phase 3 is to conduct the interviews and develop preliminary findings. Actions include gathering and reviewing documentation, conducting interviews, sending out and collecting the surveys. As a result of this phase there will be a significant amount of material and information to review.

Phase 4-Post-Assessment: The goal of phase 4 is to synthesize all of the information into a list of findings. Actions include reviewing, collating, thinking, forming opinions, and making postulations. The result of this phase is that you will develop a list of findings from all of the gathered information, reviewed documentation, interviews, and the survey. The phase 4 deliverable is a list of findings, collated survey answers, collated interview responses, a staff assessment, and a test group maturity ranking.
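As an illustration of the collation work in this phase, the sketch below (in Python, with invented question names and a hypothetical 1–5 agreement scale) averages each survey question's scores across all respondents. It is only an example of one way to collate survey answers, not a prescribed tool:

```python
from statistics import mean

# Hypothetical survey responses: each tester rates statements on a 1-5 scale
# (1 = strongly disagree, 5 = strongly agree). Question names are invented.
responses = [
    {"requirements_are_testable": 2, "automation_is_effective": 4},
    {"requirements_are_testable": 3, "automation_is_effective": 5},
    {"requirements_are_testable": 1, "automation_is_effective": 4},
]

def collate(responses):
    """Average each question's scores across all respondents."""
    questions = responses[0].keys()
    return {q: round(mean(r[q] for r in responses), 2) for q in questions}

for question, avg in sorted(collate(responses).items()):
    print(f"{question}: {avg}")
```

Low averages flag candidate findings to investigate further in interviews; the point is simply to turn raw answers into comparable numbers before forming opinions.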

The findings can be categorized into:

  • People
    • Technical skills
    • Interpersonal skills
  • Process
    • Documentation
    • Test process
    • SDLC
  • Practice
    • Strategy
    • Automation
    • Environment
    • Tools

More subcategories may also be developed to suit your needs.
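One lightweight way to keep findings organized along the SP3 categories above is to tag each finding with a category and subcategory and group them for the report. The sketch below (Python, with made-up example findings) shows the idea; it is an illustration, not part of the assessment method itself:

```python
from collections import defaultdict

# Hypothetical findings, each tagged (category, subcategory) per the SP3 model.
findings = [
    ("People", "Technical skills", "Few testers can read the automation code"),
    ("Process", "Documentation", "Test plans are rarely updated after kickoff"),
    ("Practice", "Automation", "Smoke tests are still run by hand"),
    ("People", "Interpersonal skills", "Status meetings exclude the test lead"),
]

def group_findings(findings):
    """Group finding descriptions under category -> subcategory."""
    grouped = defaultdict(lambda: defaultdict(list))
    for category, subcategory, text in findings:
        grouped[category][subcategory].append(text)
    return grouped

report = group_findings(findings)
for category in ("People", "Process", "Practice"):
    print(category)
    for subcategory, items in report[category].items():
        print(f"  {subcategory}: {len(items)} finding(s)")
```

A grouping like this makes it easy to see at a glance which of the three P's carries the most findings when you present to sponsors in phase 5.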

Phase 5-Presentation of Findings to the Project Sponsor, Executive Sponsor, and Team: The goal of phase 5 is to present preliminary findings to executives and the project sponsor, and to obtain agreement on the highest-priority improvement areas. It is important in this phase to be prepared for interpretations of the findings very different from your own. The deliverable for phase 5 is an improvement roadmap.

Phase 6-Implementation of Roadmap: The goal of phase 6 is to establish goals, with timelines, milestones, and subtasks, for the improvement tasks agreed upon. The action of phase 6 is to develop a schedule for implementing the improvement plan. It is helpful at this point to get some aspect of the project implemented immediately so people can see tangible results right away, even if they are the smallest or easiest improvement tasks. The deliverable for phase 6 is implementation of items in the improvement roadmap according to the developed schedule.

Conclusion

A test strategy is a holistic plan that starts with a clear understanding of the core objective of testing, from which we derive a structure for testing by selecting from the many testing styles and approaches available to meet that objective. Performing an assessment provides that clear understanding of where you are and what testing must accomplish. Implementing the resulting improvement roadmap can substantially improve the performance of your software testing organization and help solidify your test strategy.

LogiGear Software Test & Performance Conference 2006 Presentations

Presentations to be delivered by LogiGear at the Software Test & Performance Conference 2006 include:

  • Wednesday, Nov. 8, 8:30 am to 10:00 am – “Effectively Training Your Offshore Test Team” by Michael Hackett
  • Wednesday, Nov. 8, 1:15 pm to 2:30 pm – “How to Optimize Your Web Testing Strategy” by Hung Q. Nguyen
  • Wednesday, Nov. 8, 3:00 pm to 4:15 pm – “Agile Test Development” by Hans Buwalda
  • Thursday, Nov. 9, 8:30 am to 10:00 am – “Strategies and Tactics for Global Test Automation, Part 1” by Hung Q. Nguyen
  • Thursday, Nov. 9, 10:30 am to 12:00 pm – “Strategies and Tactics for Global Test Automation, Part 2” by Hung Q. Nguyen
  • Thursday, Nov. 9, 2:00 pm to 3:15 pm – “The 5% Challenges of Test Automation” by Hans Buwalda

To register or for more information on STP CON, see: http://www.stpcon.com/

Michael Hackett

Michael is a co-founder of LogiGear Corporation, and has over two decades of experience in software engineering in banking, securities, healthcare, and consumer electronics. Michael is a Certified Scrum Master and has co-authored two books on software testing: Testing Applications on the Web: Test Planning for Mobile and Internet-Based Systems (Wiley, 2nd ed. 2003) and Global Software Test Automation (Happy About Publishing, 2006).
He is a founding member of the Board of Advisors at the University of California Berkeley Extension and has taught for the Certificate in Software Quality Engineering and Management at the University of California Santa Cruz Extension. As a member of IEEE, his training courses have brought Silicon Valley testing expertise to over 16 countries. Michael holds a Bachelor of Science in Engineering from Carnegie Mellon University.
