Janet Gregory draws on her own experience helping agile teams cope with roadblocks, including projects without clear documentation, testers with limited domain knowledge, and the choice between black box and white box testing.
For testing on projects without clear documentation, is exploratory testing the only method? I often make “tester errors” on these projects: I might report the wrong defects, or miss errors because I do not fully understand what to test. Have you come across this, and if so, what methods would you recommend?
Unfortunately, many teams assume that the story is the only documentation they get and try to make the best of what they have, which causes many misunderstandings. A one-sentence story is only one part of what is needed. Examples defining the expected behavior, plus one or two misbehaviors (what happens if), add a lot of clarification.
However, it is the third piece that is absolutely necessary—the conversation that happens around the story and the examples (acceptance tests). If it is a complex story, you may even add a few extra examples to show alternative paths. I find that if I need too many examples to show the intent of the story, it likely should be broken up into more than one.
In the iteration planning session, I will start thinking about variations I need to test, and will write them down as they are discussed. These variations become the documentation as I automate them. I give these tests to the developer(s) who are coding the story, and we can collaborate even more to make sure we have a shared understanding.
My exploratory testing can then concentrate on what I hadn’t thought about, or on risk areas that have been identified. If there are no serious issues, what I find usually feeds into new stories or into a better understanding of the application.
How would you measure productivity and quality of an agile test team?
First, I’m not sure I’d measure productivity except maybe velocity, and that isn’t comparable between teams. Absolute numbers related to productivity are really difficult to capture. I would rather see teams monitor their trends: is their velocity increasing on average, or decreasing?
One of the issues I hear about the most is interference or blocks from other teams. When you see a problem, measure it. For example, tracking how much time is spent on non-project-specific tasks against the team's actual velocity may reveal a correlation between the two, so the problem can be addressed.
The simplest measure of quality that I have used is “How many defects or issues are reported in production by our customers?” If we keep our internal development process clean and aim for zero known defects escaping each iteration, we hope that very few defects make it to production, and the number reported is low. Of course, if the customer isn’t using the system, this metric is irrelevant.
Consider a hypothetical worst-case scenario: an offshore team without exposure to the customer, with limited domain knowledge, and receiving less-than-adequate requirements documents. What methods can test teams use?
This question can be taken one of two ways: either the developers are offshore, or the testers are. I am assuming it is the developers who are offshore and delivering code to testers after it is completed. There are a few things the tester and business users can do to help this situation. Ideally, some of the team from one location or the other have met face to face so they can start building a relationship. In this worst-case scenario, I’m guessing that didn’t happen.
First, I would suggest practicing ATDD (acceptance test driven development). Instead of handing over a story and hoping the programmers understand the nuances and make the right decisions, help them by giving them the tests you are going to run along with the story. The tests become documentation that helps drive development.
The trick is to keep the tests generic enough that you are not telling them how to code, but rather what needs to be delivered. Let’s use a simple example: if you are creating a login and password page, your acceptance tests may look something like this when written in a BDD (behavior-driven development) manner. [Note: validation rules are defined in a wiki or someplace similar.]
Acceptance Tests (high level by customer)
- Expected Behavior: Given that I am an existing user, when I go to the login page and enter a valid user name and password, then I am directed to the appropriate subsequent screen.
- Misbehavior: Given that I am an existing user, when I enter an invalid user name or password, then I get the error message “Incorrect User Name or Password” and am presented with blank fields to reenter the information.
Extended tests delivered to the programmer might be a simple list including combinations of valid user names and passwords and invalid ones depending on the defined validation rules. Some examples might be:
- Blank user name, valid password – error
- Valid user name, blank password – error
- Duplicate user names – error
- Duplicate passwords – valid
- Trailing blanks – valid (expect developers to trim the trailing blanks)
- Leading blanks – worth asking the question: do we want them trimmed or rejected?
As testers, we want to help the programmers succeed.
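As a rough illustration of what such an extended list can turn into once it is automated, here is a minimal pytest sketch. The authenticate function, the user data, and the trimming behavior are all made up for this example; the real expectations would come from the validation rules the team agreed on.

```python
# A sketch only: "authenticate" is a stand-in for the team's real login code,
# and the expected results assume the agreed validation rules (trailing
# blanks trimmed, blank fields rejected).
import pytest

VALID_USERS = {"alice": "S3cret!"}  # stand-in user store for the example


def authenticate(username: str, password: str) -> bool:
    """Stand-in for the real login check; trims trailing blanks as agreed."""
    return VALID_USERS.get(username.rstrip()) == password.rstrip()


@pytest.mark.parametrize(
    "username, password, expected",
    [
        ("", "S3cret!", False),       # blank user name, valid password - error
        ("alice", "", False),         # valid user name, blank password - error
        ("alice", "S3cret!", True),   # valid combination
        ("alice ", "S3cret!", True),  # trailing blanks - trimmed, still valid
    ],
)
def test_login_combinations(username, password, expected):
    assert authenticate(username, password) == expected
```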
In data warehousing or business intelligence (BI) projects, can we apply agile? What are common difficulties in testing these projects? Are there any best test methods for them?
One of the most common difficulties in testing data warehousing projects is breaking up the features into testable chunks.
Teams sometimes need to shift their mindsets quite a bit to accomplish this.
Successful agile BI teams have realized that they can test ETLs individually, ensuring that row counts, totals, column widths, and data types, where appropriate, remain consistent as the data moves through the ETL stream. These are areas that are well suited to automation and can become part of the regression test suite.
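As a rough sketch of what one of those automated checks might look like, the snippet below compares row counts and a control total between two stages of a load. sqlite3 stands in for whatever stores the team actually uses, and the table and column names are invented for the example.

```python
# A minimal sketch of an ETL consistency check that could join the regression
# suite. The "orders" table and "amount" column are made-up names.
import sqlite3


def row_count(conn, table):
    """Number of rows in a table."""
    return conn.execute(f"SELECT COUNT(*) FROM {table}").fetchone()[0]


def column_total(conn, table, column):
    """Control total of a numeric column."""
    return conn.execute(f"SELECT COALESCE(SUM({column}), 0) FROM {table}").fetchone()[0]


def check_etl_step(source, target, table="orders", column="amount"):
    """Fail loudly if counts or control totals drift between two ETL stages."""
    assert row_count(source, table) == row_count(target, table), "row count drift"
    assert column_total(source, table, column) == column_total(target, table, column), "control total drift"
```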
Other areas, like the consistency or accuracy of the data, cannot be automated and need to be addressed with approaches like exploratory testing.
One of the difficulties can be getting early feedback from the customer: is the data meeting their needs? The quality of the data is what becomes important in BI projects.
In general, how much do people employ user scenario testing as opposed to requirements validation and exploratory testing methods?
I’m not sure I’d differentiate the testing in that way. I don’t think that user scenario testing is “opposite” to either requirements validation (which is testing that the code does what we thought it should) or exploratory testing (which is more like testing our expectations). Scenario testing checks that the workflows work, which could be considered part of either.
I think it is important to understand why you are testing, and not get hung up on the type of testing you are doing. You might want to investigate the agile testing quadrants (covered in several chapters of our book, or on Brian Marick’s blog: http://www.exampler.com/old-blog/2003/08/21/#agile-testing-project-1).
Features are broken up into multiple stories. I like to define acceptance tests at the feature level as well as the story level. These feature level tests usually look a lot like user workflows or scenarios. This makes it a different level of test.
When I practice ATDD, I define my tests up front (testing what I can think of about the story – the requirements), and collaborate with the programmers so we have a shared understanding of what we are building. I have many tools I use to determine what tests I need. When I do my exploratory testing, I engage other skills, some of which might include using my domain knowledge to test various workflows.
Are there companies you’re aware of that use older test methods like cause-effect graphing or model-based testing (from black box, not white box)?
Cause-effect graphing is one method for determining what to test; there are many other methods as well, and some are used more than others. I think testers use different tools to help them get adequate coverage and shouldn’t limit themselves to one type. Each feature or story may call for trying something different.
Model-based testing is an automation approach that drives various flows through the system. I do know of organizations that have tried to model their legacy system for their automation rather than try to get complete coverage using static data. They have had different levels of success.
Most organizations that are doing ongoing automation on each user story define their automated tests as they go, so I’m not seeing model-based testing as much.
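For readers unfamiliar with the idea, here is a minimal, tool-agnostic sketch of model-based testing: a tiny state model of a made-up login flow is walked at random to generate many different paths through the system, each of which could be executed as a test. The states, actions, and flow are assumptions for illustration only, not any particular organization's model.

```python
# A tiny illustrative state model walked at random to generate test paths.
import random

# States and the actions allowed from each one.
MODEL = {
    "logged_out": ["log_in"],
    "logged_in": ["view_profile", "log_out"],
    "profile": ["back", "log_out"],
}

# Where each (state, action) pair leads.
TRANSITIONS = {
    ("logged_out", "log_in"): "logged_in",
    ("logged_in", "view_profile"): "profile",
    ("logged_in", "log_out"): "logged_out",
    ("profile", "back"): "logged_in",
    ("profile", "log_out"): "logged_out",
}


def random_walk(steps=20, seed=None):
    """Generate one path through the model; each path is a candidate test."""
    rng = random.Random(seed)
    state, path = "logged_out", []
    for _ in range(steps):
        action = rng.choice(MODEL[state])
        path.append((state, action))
        state = TRANSITIONS[(state, action)]
    return path


if __name__ == "__main__":
    for state, action in random_walk(steps=10, seed=1):
        print(f"in {state}: perform {action}")
```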
With test and lifecycle management tools so common today, and those tools dictating test design instead of test teams developing a design on their own, do you see test teams being limited by tool use?
It is easy to rely on tools to do our work for us, but no tool can think like the human brain. I would hope that testers see the limitations of the tool and challenge their own perceptions. One area where a tool cannot compete is in the area of exploratory testing.
When my team is cemented into “doing what we always do,” how can I introduce new test methods?
The first step is recognizing you are in a rut, which you have already done. Now you need to get the rest of the team to want to try new ideas. One of the most powerful examples I have to share is from an organization I worked with, where the QA director introduced weekly lunch and learns. She created a learning organization within her team.
Each week, she had one of the team members give a presentation on a new idea or introduce a new tool. It encouraged individuals to research beyond what they already knew, gave others opportunities to follow up and try new things, and established the habit of sharing ideas and learning from each other.
An agile testing coach and practitioner, Janet Gregory is the co-author of Agile Testing: A Practical Guide for Testers and Agile Teams and a contributor to 97 Things Every Programmer Should Know. Janet specializes in showing agile teams how testers can add value in areas beyond critiquing the product; for example, by guiding development with business-facing tests. For the past ten years, Janet has been working with teams to transition to agile development, and teaches agile testing courses and tutorials worldwide. She enjoys sharing her experiences at conferences and user group meetings around the world, and was named one of the 13 Women of Influence in testing by Software Test & Performance magazine.
For more about Janet’s work, visit www.janetgregory.ca or visit her blog at janetgregory.blogspot.com.