Formulating a Software Test Strategy

This article was adapted from “How to Optimize Your Web Testing Strategy,” a presentation to be delivered by Hung Q. Nguyen, CEO and founder of LogiGear Corporation, at the Software Test & Performance Conference 2006 at the Hyatt Regency in Cambridge, Massachusetts (November 7–9, 2006).

Introduction

This article provides a brief overview of the process for formulating a software test strategy, the key elements it should include, and the critical questions you should be asking yourself along the way.

Formulating a Test Strategy

Some of the key things to remember when formulating a software test strategy are:

  1. It is teamwork, not something done by an individual or handed down from on high to be implemented
  2. It requires all stakeholders to participate
  3. It requires executive support
  4. It requires that participants think outside the box: in essence, they should start with a blank piece of paper rather than with preconceived notions or approaches that represent the way things have always been done
  5. It requires a lot of asking, “Why?”
  6. It requires thinking from the bottom up and starting from the end

A well-formulated software test strategy should include several key elements:

  1. Identifying different product development styles from inception through maintenance, so that we can eventually map the appropriate test strategy to each
  2. Mapping out phases, milestones, and relevant activities on a timeline
  3. Identifying the equivalent type of test strategies for each development method
  4. Prescribing what is involved in each test strategy
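To make elements 1–4 concrete, the mapping from a development style to its equivalent test strategy can be captured in something as simple as a lookup table. The sketch below is purely illustrative: the style names, phases, and activities are hypothetical examples, not prescriptions.

```python
# Illustrative sketch: each (hypothetical) development style maps to an
# outline of its equivalent test strategy -- phases on the timeline and
# the test activities prescribed for them.
DEV_STYLE_TO_TEST_STRATEGY = {
    "waterfall": {
        "phases": ["requirements", "design", "implementation", "verification"],
        "test_activities": ["design review", "unit testing",
                            "system testing", "regression testing"],
    },
    "agile": {
        "phases": ["iteration planning", "development", "iteration review"],
        "test_activities": ["unit testing", "exploratory testing",
                            "regression testing"],
    },
}

def strategy_for(style: str) -> dict:
    """Return the test-strategy outline mapped to a development style."""
    return DEV_STYLE_TO_TEST_STRATEGY[style]
```

Even a table this small forces the team to state, per development style, which activities happen in which phase, which is exactly the mapping the four elements above call for.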

When undertaking the process of formulating a test strategy, you should be asking yourself:

  1. What are your quality objectives or characteristics? Examples of quality objectives include: functionality, usability, performance, security, compatibility, scalability, and recovery.
  2. What are the requirements for each characteristic?
  3. What are the types of bugs that affect each quality characteristic?
  4. What are the test types or activities needed to support finding the problems described in #3? These may include: design reviews, code inspections/reviews, code walk-throughs, design walk-throughs, unit testing, API testing, external functional testing, usability testing, accessibility testing, configuration testing, compatibility testing, regression testing, performance testing, load testing, stress testing, failover/recovery testing, installation testing, security testing, and compliance testing.
  5. What are the most effective approaches to finding specific types of bugs as early as possible? Approaches may include: requirement-based testing, scenario-based testing, story-based testing, soap opera testing, model-based testing, attack-based testing, risk-based testing, fault injection, Diagnostic Approach to Software Testing (DAST), exploratory testing, and so on.
  6. What is the required application maturity to support #4?
  7. How would #5 and #6 be mapped to the various phases in the Software Development Life Cycle (SDLC)?
  8. How would you qualify the maturity of the software to determine that it has reached its milestone?
  9. How do you quantify and measure your work?
  10. What tools can help you improve your work and which framework is needed to implement the tool successfully?

The results of such a strategy formulation process can be:

  • A reduction in the number of missed bugs
  • Few or no missed critical bugs
  • Test automation frameworks deployed for better visibility, maintainability, and productivity

Conclusion

“Strategy without tactics is the slowest route to victory. Tactics without strategy is the noise before defeat.”

– Sun Tzu

LogiGear Software Test & Performance Conference 2006 Presentations

Presentations to be delivered by LogiGear at the Software Test & Performance Conference 2006 include:

  • Wednesday, Nov. 8, 8:30 am to 10:00 am – “Effectively Training Your Offshore Test Team” by Michael Hackett
  • Wednesday, Nov. 8, 1:15 pm to 2:30 pm – “How to Optimize Your Web Testing Strategy” by Hung Q. Nguyen
  • Wednesday, Nov. 8, 3:00 pm to 4:15 pm – “Agile Test Development” by Hans Buwalda
  • Thursday, Nov. 9, 8:30 am to 10:00 am – “Strategies and Tactics for Global Test Automation, Part 1” by Hung Q. Nguyen
  • Thursday, Nov. 9, 10:30 am to 12:00 pm – “Strategies and Tactics for Global Test Automation, Part 2” by Hung Q. Nguyen
  • Thursday, Nov. 9, 2:00 pm to 3:15 pm – “The 5% Challenges of Test Automation” by Hans Buwalda

To register or for more information on STP CON, see: http://www.stpcon.com/

Hung Nguyen

Hung Nguyen co-founded LogiGear in 1994, and is responsible for the company’s strategic direction and executive business management. His passion and relentless focus on execution and results have been the drivers for the company’s innovative approach to software testing, test automation, testing tool solutions, and testing education programs.

Hung is co-author of the top-selling book in the software testing field, “Testing Computer Software” (Wiley, 2nd ed. 1993), and other publications including “Testing Applications on the Web” (Wiley, 1st ed. 2001, 2nd ed. 2003) and “Global Software Test Automation” (HappyAbout Publishing, 2006). His experience prior to LogiGear includes leadership roles in software development, quality, product, and business management at Spinnaker, PowerUp, Electronic Arts, and Palm Computing.

Hung holds a Bachelor of Science in Quality Assurance from Cogswell Polytechnical College, and completed a Stanford Graduate School of Business Executive Program.

Rob Pirozzi

Rob Pirozzi has over 20 years of sales, marketing, management, and technology experience in high technology, with exposure to industries including financial services, healthcare, higher education, government, and manufacturing, and a strong track record of success. He has a proven ability to build and maintain strong relationships, contribute to target organization success, and deliver results. Website: http://www.robpirozzi.com/


