An Explanation of Performance Testing on an Agile Team (Part 1 of 2)

Introduction

This two-article series describes activities that are central to successfully integrating application performance testing into an Agile process. The activities described here specifically target performance specialists who are new to fully integrating performance testing into an Agile or other iterative process, though many of the concepts and considerations can be valuable to any team member. Combined, the two articles will cover the following topics:

  • Introduction to Integrated Performance Testing on an Agile Team
  • Understand the Project Vision and Context
  • Identify Reasons for Testing Performance
  • Identify the Value Performance Testing Adds to the Project
  • Configure Test Environment
  • Identify and Coordinate Immediately Valuable Tactical Tasks
  • Execute Task(s)
  • Analyze Results and Report
  • Revisit Value and Criteria
  • Re-prioritize Tasks
  • Additional Considerations
  • Additional Resources

This first article in the series covers the first four items (Introduction to Integrated Performance Testing on an Agile Team through Identify the Value Performance Testing Adds to the Project).

The keys to fully integrating performance testing into an Agile process are team-wide collaboration, effective communication, a commitment to adding value to the project with every task, and the flexibility to change focus. This article aims to provide the new performance specialist with the concepts and methods necessary to enable the team to reap the benefits of integrating performance testing into the Agile process without facing unacceptable risks.

Introduction to Integrated Performance Testing on an Agile Team

Because implementing an Agile philosophy implies different things to different teams, there is no single formula for integrating performance testing into an Agile process. Add to this the reality that effectively integrating performance testing into any development philosophy is difficult at best, and most teams decide that integrating performance testing into their Agile process is too hard or too risky to even attempt. Fortunately, performance testing is naturally iterative, making its integration into an Agile process highly effective when it works.

At a high level, performance testing within an Agile team loosely follows the flow depicted by the graphic below.

This flow embraces change and variable-length iterations within a project’s life cycle. Because the iteration goal is to deliver working code, it encourages planners to plan just far enough in advance to facilitate team coordination, but not so far ahead that the plan is likely to need significant revision to execute.

Additionally, an iteration may go over the same area of code and re-factor it several times. This means that in practice, any activity can happen at any moment in time, in any sequence, one or more at a time. One day the team might work on each activity several times in no discernible order, while the next two days might be spent entirely within a single activity—it is all about doing whatever can be accomplished right now to deliver working code at the end of the iteration, thus providing the greatest value to the project as a whole. The performance specialist’s challenge in this type of process is that they will most frequently be testing parts of the overall system rather than the completed system as a whole.

While the perspective of this article focuses on the activities that the performance specialist frequently drives or champions, this is neither an attempt to minimize the concept of team responsibility nor an attempt to segregate roles. The team is best served if the performance specialist is an integrated part of the team who participates in team practices such as pairing. Any sense of segregation is unintentional and a result of trying to simplify explanations.

Understand the Project Vision and Context

Project Vision

Even though the features, implementation, architecture, timeline, and environment(s) are likely to be fluid, the project is being conducted for a reason. Before tackling performance testing, ensure that you understand the current project vision. Revisit the vision document regularly, as it has the potential to change as well. Although everyone on the team should be thinking about performance, it is the performance specialist’s responsibility to be proactive in understanding and keeping up to date with the relevant details across the entire team. The following are some examples of high-level vision goals for a project:

  • Evaluate a new architecture for an existing system.
  • Develop a new custom system to solve business problem X.
  • Evaluate new software development tools.
  • As a team, become proficient with a new language or technology.
  • Re-engineer an inadequate application before the holiday rush to avoid a repeat of the negative reaction received last year when the application failed.

Project Context

The project context is nothing more than those circumstances and considerations that are, or may become, relevant to achieving the project vision. Some examples of items that may be relevant in your project context include:

  • Client expectations
  • Budget
  • Timeline
  • Staffing
  • Project environment
  • Management approach

Team members will often gain an initial understanding of these items during a project kickoff meeting, but the project’s contextual considerations should be revisited regularly throughout the project as more details become available and as the team learns more about the system they are developing.

Tips for the Performance Specialist

Understand the Project Management Environment

In terms of the project environment, the most important thing to understand is how the team is organized, how it operates, and how it communicates. Agile teams tend to use rapid communication and management methods rather than long-lasting documents and briefings, instead opting for daily stand-ups, story cards, and interactive discussions. Failure to identify and agree upon these methods at the outset can put performance testing behind before it begins. Asking questions similar to the following may be helpful:

  • Does the team have any meetings, stand-ups, or scrums scheduled?
  • How are issues raised or results reported?
  • If I need to collaborate with someone, should I send e-mail? Schedule a meeting? Use Instant Messenger? Walk over to their office?
  • Does this team employ a “do not disturb” protocol when an individual or sub-team desires “quiet time” to complete a particularly challenging task?
  • Who is authorized to update the project plan or project board?
  • How are tasks assigned and tracked? A software system? Story cards? Sign-ups?
  • How do I determine which builds I should focus on for performance testing? Daily builds? Friday builds? Builds with a special tag?
  • How do performance testing builds get promoted to the performance test environment?
  • Will the developers be writing performance unit tests? Can I pair with them periodically so we can share information?
  • How do you envision coordination for performance-testing tasks taking place?

Understand the Timeline and Build Schedule

Understanding the project build schedule is critical for a performance specialist. If you do not have a firm grasp of how and when builds are made, your performance testing will not only be perpetually behind schedule, but will also waste time on builds that are not appropriate for the test being conducted. It is important that some person or artifact can communicate to you the anticipated sequence of deliveries, features, and/or hardware implementations that relate to your work far enough in advance for you to coordinate your tests. Because you will not be creating a formal performance test plan at the outset of the project, you need not concern yourself with dates, resources, or details sufficient for long-range planning. What matters is that you understand the anticipated timeline and the immediate tasks at hand, and that you understand the build process well enough to make good recommendations about which tests are most likely to add the greatest value at any particular point in time.

Understand the System

At this stage, you need to understand the intent of the system to be built, what is currently known or assumed about its hardware and software architecture, and the available information about the customer or user of the completed system. In addition, the performance specialist should be involved in decisions about the system and the architecture, making appropriate suggestions and raising performance-related concerns even before features or components are implemented.

With many Agile projects, the architecture and functionality of the system changes during the course of the project. This is to be expected. In fact, the performance testing you do is frequently the driver behind at least some of those changes. By keeping this in mind, you will neither over-plan nor under-plan performance-testing tasks in advance of starting them.

Identify Reasons for Testing Performance

Every project team has different reasons for deciding to include, or not include, performance testing as part of its process. Failure to identify and understand these reasons virtually guarantees that the performance-testing aspect of the project will not be as successful as it could have been. Examples of possible reasons for integrating performance testing as part of the project might include the following:

  • Improve performance unit testing by pairing with developers.
  • Assess and configure new hardware by pairing with administrators.
  • Evaluate algorithm efficiency.
  • Monitor resource usage trends.
  • Measure response times.
  • Collect data for scalability and capacity planning.

It is generally useful to identify the reasons for conducting performance testing very early in the project. These reasons are bound to change and/or shift priority as the project progresses, so you should revisit them regularly as you and your team learn more about the application, its performance, and the customer or user.
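The first reason above, improving performance unit testing, can be as simple as putting a timing budget around a single call. The function name, 50 ms budget, and run count below are all hypothetical; this is a minimal sketch of the idea, not a prescription for any particular team's test harness:

```python
import time

def lookup_account(account_id):
    # Stand-in for the code under test; a real test would exercise
    # the team's actual function.
    return {"id": account_id, "status": "active"}

def test_lookup_account_stays_under_budget(budget_seconds=0.050, runs=100):
    """Time many calls and fail if the average exceeds the budget."""
    start = time.perf_counter()
    for i in range(runs):
        lookup_account(i)
    average = (time.perf_counter() - start) / runs
    assert average < budget_seconds, (
        f"average {average * 1000:.2f} ms exceeds "
        f"{budget_seconds * 1000:.0f} ms budget"
    )
    return average

avg = test_lookup_account_stays_under_budget()
print(f"average call time: {avg * 1000:.4f} ms")
```

Pairing with a developer on a test like this also surfaces useful conversations about what the budget should be and which operations deserve one.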

Tips for the Performance Specialist

The reasons for conducting performance testing equate to the considerations that will ultimately be used to judge the success of the performance-testing effort. Those criteria encompass not only the performance requirements, goals, and targets for the application, but also the reasons for conducting performance testing at all, including those that are financial or educational in nature. For example, criteria for evaluating the success of a performance-testing effort might include:

  • Identifying significant performance issues in the hardware and third-party software early in the project.
  • Performance team, developers, and administrators working together with minimal supervision to tune and determine the capacity of the architecture.
  • Conducting performance testing effectively without extending the duration or cost of the project.
  • Determining the most likely failure modes for the application under higher-than-expected load conditions.
  • Determining the number of users a particular configuration can support.
  • Determining the end-user response time under various conditions.
  • Validating that performance tests predict production performance within +/- 10%.
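The last criterion above, predicting production performance within +/- 10%, reduces to a relative-error check between what the test environment predicted and what production later observed. The page names and timings below are invented purely for illustration:

```python
def within_tolerance(predicted, observed, tolerance=0.10):
    """True if the prediction is within +/- tolerance of the observed value."""
    return abs(predicted - observed) / observed <= tolerance

# Hypothetical response times in seconds: test-lab prediction vs. production.
samples = [
    ("login", 1.8, 2.0),
    ("search", 0.95, 1.0),
    ("checkout", 3.4, 3.0),
]
for page, predicted, observed in samples:
    ok = within_tolerance(predicted, observed)
    print(f"{page}: predicted {predicted}s, observed {observed}s, "
          f"within 10%: {ok}")
```

Tracking this over several releases tells the team how much trust to place in the performance test environment.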

It is important to record, and keep up to date, the criteria that will determine whether performance testing is successful for your project, in a manner appropriate to your project’s standards and expectations. It is also valuable to maintain those criteria where they are readily accessible to the entire team; whether that is a document, a team wiki, a task-management system, story cards, or a whiteboard matters only to the degree that it works for your team.

The initial determination of performance-testing success criteria can often be accomplished in a single work session, or possibly during the project kickoff. Remember that at this point you are articulating and recording success criteria for the performance-testing effort, not collecting performance goals and requirements for the application.

Other information to consider when determining performance-testing success criteria includes:

  • Exit criteria (how to know when you are done)
  • Key areas of investigation
  • Key data to be collected
  • Contractually binding performance requirements or Service Level Agreements (SLAs)

Identify the Value Performance Testing Adds to the Project

The value of performance testing is not limited to reporting the volumes and response times of a nearly completed application. Some other value-adds could include:

  • Helping developers create better performance unit and component tests
  • Helping administrators tune hardware and commercial, off-the-shelf software more efficiently
  • Validating the adequacy of network components
  • Collecting data for scalability and capacity planning
  • Providing resource consumption trends from build to build
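The last value-add above, providing resource consumption trends from build to build, amounts to keeping a simple history and flagging growth above some agreed threshold. The build names, memory figures, and 10% threshold below are all hypothetical placeholders for whatever the team actually monitors:

```python
# Hypothetical peak memory samples (MB) collected during each build's
# performance run; real numbers would come from monitoring tools.
builds = {
    "build-41": 512,
    "build-42": 520,
    "build-43": 610,
}

def flag_regressions(history, threshold=0.10):
    """Flag builds whose usage grew more than threshold over the prior build."""
    flagged = []
    names = list(history)
    for prev, curr in zip(names, names[1:]):
        growth = (history[curr] - history[prev]) / history[prev]
        if growth > threshold:
            flagged.append((curr, growth))
    return flagged

for build, growth in flag_regressions(builds):
    print(f"{build}: memory grew {growth:.0%} over previous build")
```

Even a crude trend like this often catches a regression within a build or two of its introduction, while the responsible change is still fresh in everyone's mind.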

Tips for the Performance Specialist

Once you understand the system, the project, and the performance-testing success criteria, the potential value that performance testing can add should start to become clear. You now have what you need to begin to conceptualize an overall strategy for performance testing. Whatever strategy you choose, it will be most effective when communicated to the entire team using a method that encourages feedback and discussion. Strategies should not contain excessive detail or narrative text; they are intended to help focus decisions, be readily available to the entire team, include a method for anyone to make notes or comments, and be easy to modify as the project progresses.

Although there is a wide range of information that could be included in the strategy, the critical components are the envisioned goals or outcomes of the test and the anticipated tasks to achieve that outcome. Other types of information that might be valuable to discuss with the team when preparing a performance test strategy for a performance build include:

  • The reason or intent for performance-testing this delivery
  • Prerequisites for strategy execution
  • Tools and scripts required
  • External resources required
  • Risks to accomplishing the strategy
  • Data of special interest
  • Areas of concern
  • Pass/Fail criteria
  • Completion criteria
  • Planned variants on tests
  • Load range
  • Tasks to accomplish the strategy

Conclusion

The keys to fully integrating performance testing into an Agile process are team-wide collaboration, effective communication, a commitment to adding value to the project with every task, and the flexibility to change focus. In this article we discussed:

  • Introduction to Integrated Performance Testing on an Agile Team
  • Understand the Project Vision and Context
  • Identify Reasons for Testing Performance
  • Identify the Value Performance Testing Adds to the Project

The second article in this series will go on to discuss:

  • Configure Test Environment
  • Identify and Coordinate Immediately Valuable Tactical Tasks
  • Execute Task(s)
  • Analyze Results and Report
  • Revisit Value and Criteria
  • Re-prioritize Tasks
  • Additional Considerations
  • Additional Resources
Scott Barber

Scott Barber is the CTO of PerfTestPlus, executive director of the Association for Software Testing (AST) and co-founder of the Workshop on Performance and Reliability (WOPR). A recognized expert in performance testing and analysis, he combines experience and a passion for solving performance problems with a context-driven approach that he sometimes calls a “scientific art” to produce accurate results. Scott is an international keynote speaker, trainer, consultant and writer of articles for a variety of publications. You can contact him at sbarber@perftestplus.com.

