Testing in Agile Part 3: Practices and Process

Summary

Remember that Agile is not an SDLC. Neither are Scrum and XP, for that matter. Instead, these are frameworks for projects; they are built from practices (for example, XP has 12 core practices). Scrum and XP advocates will freely recommend that you pick a few practices to implement, keep what works, discard what doesn't, optimize the rest, and add more over time. But be prepared: picking and choosing practices may create new bottlenecks and will expose weaknesses. This is why we need continuous improvement!

That being said, there are some fundamental practices particular to test teams that should be implemented; without them, your chances of success with agile are slim.

Merely being aware of possible, even probable, pitfalls, or even implementing the practices most important to traditional testers, does not guarantee agile success or testing success. It should, however, help you avoid a complete collapse of a product or team, or, potentially as harmful, a finger-pointing game.

This third part of our Testing in Agile series focuses on the impact on testers and test teams of projects implementing agile, XP, and Scrum. In this installment, we concentrate on the practices that matter most to software testers.

And, in the final analysis, even if you implement all the important practices, and implement them well, you need to review how successful they are in your retrospectives and keep what works, then modify, optimize, and change what doesn't. Your Scrum Master or Scrum coach helps with this.

Practices

In this section, we will focus mainly on XP practices, which are the development practices, rather than on the Scrum/project management practices, since it is the development practices that affect test teams most.

To be blunt, if your developers are not unit testing, and the team does not have an automated build process, the team's success in agile will be limited at best.

Unit Testing / TDD and Automated User Story Acceptance Testing

Unit testing and test-driven development are so fundamental to agile that if your team is not unit testing, I cannot see how testers could keep up with the rapid release of code and fast integration on agile projects. The burden of black- and gray-box testing in the absence of unit testing in very fast, 2-to-4 week sprints should frighten any sane person who is knowledgeable in software development. Your developers need to be unit testing most, if not all, of their code in rapid, agile projects; there is no way around it.
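
To ground this, here is a minimal sketch of the kind of developer unit test meant here, written TDD-style with Python's built-in unittest module. The apply_discount function and its rules are hypothetical stand-ins for your own production code.

```python
import unittest

def apply_discount(price, percent):
    """Hypothetical production code: return price reduced by percent."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTest(unittest.TestCase):
    # In TDD these tests are written first, fail, and then drive
    # the implementation of apply_discount above.

    def test_typical_discount(self):
        self.assertEqual(apply_discount(100.00, 20), 80.00)

    def test_zero_discount_returns_original_price(self):
        self.assertEqual(apply_discount(59.99, 0), 59.99)

    def test_invalid_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.00, 150)

if __name__ == "__main__":
    unittest.main()
```

Tests like these run in milliseconds, which is what makes re-running them on every build feasible.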

Automated user-story acceptance testing is second only to unit testing in importance. This kind of test validates that the task or goal of the user story has been achieved, rather than validating code as a unit test does. Having such tests automated and available to re-run on successive builds and releases frees a test team to focus on more effective exploratory testing, error guessing, scenario and workflow testing, data variation, and alternative-path testing that unit tests and user-story validation tests rarely cover. This leads to finding better bugs earlier and releasing higher-quality software.
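
For contrast with the unit test above, here is a sketch of what an automated user-story acceptance test can look like: it validates the story's goal ("a shopper can add an item to the cart and check out") rather than a single unit of code. The Cart class is invented for illustration; in practice the test would typically drive the real, integrated application.

```python
import unittest

# Hypothetical application code standing in for the real system under test.
class Cart:
    def __init__(self):
        self.items = []

    def add(self, sku, qty=1):
        self.items.append((sku, qty))

    def checkout(self):
        if not self.items:
            raise RuntimeError("cannot check out an empty cart")
        return {"status": "confirmed", "lines": len(self.items)}

class ShopperCanCheckOutStory(unittest.TestCase):
    """Acceptance test: exercises the user story's goal end to end,
    not an individual function."""

    def test_add_item_then_checkout_confirms_order(self):
        cart = Cart()
        cart.add("SKU-123")
        order = cart.checkout()
        self.assertEqual(order["status"], "confirmed")

if __name__ == "__main__":
    unittest.main()
```

Once a story's acceptance test passes and is checked in, it becomes a cheap regression check on every later build.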

100% developer unit testing is one of the most significant advancements within the agile development process. We all know this does not guarantee quality, but it is a big step toward improving the quality of any software product.

From the Testing State-of-the-Practice Survey we conducted on logigear.com, we can get a rough idea of how close we are to that ideal.

Here is one question regarding unit testing from our survey:

What percentage of code is being unit tested by developers before it gets released to the test group (approximately)?

Percentage of code unit tested    Percent of respondents
100%                              13.60%
80%                               27.30%
50%                               31.50%
20%                               9.10%
0%                                4.50%
No idea                           13.60%

* This was out of 100 respondents.

A vast majority of agile teams are unit testing their code, though only a fraction are testing all of it. It's important to know that most agile purists recommend 100% unit testing, for good reason. If there are problems with releases, integration, missed bugs, or scheduling, look first at increasing the percentage of code that is unit tested!

Automated Build Practice And Build Validation Practice / Continuous Integration

With the need for speed, rapid releases, and very compressed development cycles, an automated build process is a no-brainer. This is not rocket science, and it is not specific to Agile/XP. Continuous integration tools have been around for years; there are many of them, and they are very straightforward to use. It is also common for test teams to take over the build process. Implementing an automated build process by itself is a step forward, but a team will realize more significant gains if it extends automated builds into full continuous integration.
Continuous integration includes the following (a minimal pipeline sketch follows the list):

  • Automated build process
  • Re-running of unit tests
  • Smoke test/build verification
  • Regression test suite
  • Build report
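
As a rough illustration of how those steps chain together, here is a toy Python driver. The make and pytest commands and the tests/ paths are placeholders for whatever build and test tooling your team actually uses; a real team would normally configure these same stages in a CI server rather than hand-roll a script.

```python
import subprocess
import sys
from datetime import datetime

# Each stage maps to an item in the list above; the commands are
# placeholders for your team's actual build and test tooling.
PIPELINE = [
    ("automated build", ["make", "build"]),
    ("unit tests", ["python", "-m", "pytest", "tests/unit"]),
    ("smoke test / build verification", ["python", "-m", "pytest", "tests/smoke"]),
    ("regression suite", ["python", "-m", "pytest", "tests/regression"]),
]

def run_pipeline():
    results = []
    for name, cmd in PIPELINE:
        ok = subprocess.run(cmd).returncode == 0
        results.append((name, ok))
        if not ok:
            break  # fail fast: a broken build should not reach later stages
    # Build report: the last item in the list above.
    print(f"Build report {datetime.now():%Y-%m-%d %H:%M}")
    for name, ok in results:
        print(f"  {name}: {'PASS' if ok else 'FAIL'}")
    return all(ok for _, ok in results)

if __name__ == "__main__":
    sys.exit(0 if run_pipeline() else 1)
```

The point is not the script itself but the discipline: every check-in triggers the same build, the same tests, and the same report, so a broken build is visible within minutes.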

The ability to have unit tests continually re-run has significant advantages:

  • It can help find integration bugs faster.
  • Qualifying builds faster will free up tester time.
  • Testing on the latest and greatest build will save everyone time.

The positives of continuous integration far outweigh any resistance to implementing it. Test teams can take on more responsibility here: they can be more in control of the testing process, deciding how many builds they take and when they take them, for starters.

Hardening Sprint / Regression Sprint / Release Sprint

The most successful agile teams have implemented a sprint that, in effect, is specifically designed and tailored to just test: quite simply, a "testing sprint." Although this testing or integration sprint can go by many names, regression sprint and hardening sprint are the most common. Prior to releasing to the customer, someone usually has to do security, performance, accessibility, usability, scalability, perhaps localization, or many other types of tests that are most effectively done once the product is fully integrated. In most cases, this is also when end-to-end, workflow, and user-scenario tests are done, and when full regression suites are executed. It is a great bug-finding and confidence-building sprint. But! It's late in the development cycle. "Bugs" found here may go directly into the backlog for the next release or cause a feature to be pulled from a release.

Estimating And Planning Poker Include Testers

Testers participating in the sizing and estimating of user stories is fundamental to agile success. A few times I have run across companies trying to scope, size, and rank a backlog without test team input. This is a gigantic mistake. Let me tell a quick story.

I was at a company that was doing a good job of implementing scrum, which they had piloted across a few teams. They still had some learning to do and were still rolling out practices, but overall they were off to a good start!

The group that had the toughest time adapting to scrum, though, was the testers. This was because in their previous SDLC, the test team was viewed as adversarial, composed mainly of outsiders. Some of those attitudes persisted, to the point where the product owner (a former "marketing person") excluded testers from the user-story sizing and estimation process.

During a coaching session, I was reviewing some user stories with them. We were doing some preliminary sizing, a first pass assigning only four estimates: small, medium, large, and extra large. (Note: this is a great way to start. Some people call it grouping by shirt size, roughly estimating what can get done in a sprint.)

One story got sized as a large and another as a medium. From my experience with their product, I picked those two stories out and pointed out that the story ranked as a large was very straightforward to test; any tester knowledgeable about that area could do an effective test job pretty quickly. But the user story they had sized as a medium was a testing nightmare! I quickly ran through a list of situations that had to be tested (cross-browser, data, errors, etc.), all of those things that testers do! The team believed me, and we pulled in the test engineer to review our results. The tester quickly said the same thing I had, and pointed to this as a sore point for testers. The stories would have been sized completely wrong for the sprint (as had been the problem for the previous test team) if the test team continued to be excluded from sprint planning and the planning poker session.

This does not mean the development complexity would change; testing input would not necessarily move the large story down to a medium. But it would have moved the medium story to a large or even an extra large, because of the testing effort involved! The lesson learned here is that the test team needed to be included in the planning game. Attitudes had to change, or costly estimating mistakes would keep being made.

This practice is also crucial to the XP values of trust and respect! Sadly, in many situations I have seen testers excluded from planning meetings, and invariably it is a trust problem! Deal with the trust and respect problem and get involved in the complete planning process!

Definition Of Done

We're all used to milestone criteria, entrance criteria, and exit criteria in whatever SDLC we're using. The term people on agile projects use that corresponds to milestone criteria is the definition of done. Most often, the problem with milestone criteria is that they are routinely ignored when schedules get tight. This often leads to frustration, bad quality decisions, and ill feelings.

I want to show a simple description of agile that will help us in the discussion.

We are all familiar with the traditional three points of the project triangle that every project must juggle: features (number of features, quality, stability, etc.), cost (resources), and schedule. Before agile, projects committed to feature delivery, then would add people (cost) or push out dates (schedule), and sometimes release buggy features, to meet whatever constraint project managers felt had to hold!
Agile is different.

In agile, the cost, namely the size and resources of the team, is fixed; we know that adding people to a project reduces efficiency (see Agile Series Part 1). The schedule is also fixed: never extend the length of a sprint. What can change, and what is estimated, is the set of features. This leads us back to the definition of done.
What gets done, the user stories/features, gets released. What does not get done goes into the backlog for the next sprint. Since sprints are so short, this is not as dramatic as pulling a feature from a quarterly release, where the customer would have to wait another quarter for that functionality. A feature pulled from a sprint and put into the backlog can be delivered just a few weeks later. How do we know what's done? A guiding principle from the Agile Manifesto puts it well:

  • Working software is the primary measure of progress.

The definition of done varies from group to group. There is no one definition, though it commonly includes at least the following:

  • the ability to demonstrate functionality
  • complete (100%) unit testing
  • zero priority-1 bugs
  • complete documentation

Many teams also include a demonstration of the user story or feature before it can be called done.

In the past, for most teams, it seemed like a nice idea to have a set of milestone criteria, but they were routinely ignored. On agile projects, though, with rapid releases, the risk of slipping on the done criteria could be catastrophic to the system. Done is a safety net for the entire scrum team and, really, the entire organization.

In the past, many test teams acted as the entrance- and exit-criteria police, since late-stage milestone criteria are often based on testing, bugs, and code freeze: items testers see and are responsible for reporting on. Now it is the Scrum Master who enforces the done criteria, not testers. It is much better to have the Scrum Master be the enforcer than to have testers cast as naysayers and complainers! Every team needs a Scrum Master!

Small Releases

  • Deliver working software frequently.

In Scrum it is recommended that sprints last from two to four weeks at most. The practice of small iterative releases is the very core of agile development. I have seen companies rename their quarterly release a sprint and say: "we're agile!" No.

A three-month sprint is not a sprint at all. Sprints are meant to be very narrow in focus, to demonstrate functionality before moving on to the next sprint, and to work from a prioritized and realistic backlog. These, among many other reasons, should keep your iterations short. Some companies have begun agile implementations with four-week sprints and a plan to reduce the sprint length to three or two weeks over a year, after some successful releases and retrospectives with process improvement. Ken Schwaber and Jeff Sutherland, the original presenters of Scrum, recommend beginning with a two-week sprint.

Measure Burndown And Velocity

I have brought up the phrases sustainable pace and burndown chart a few times. Let's briefly discuss these practices.
First, two guiding ideas:

  • We have to work at a sustainable pace. Crazy long hours and overtime lead quickly to job dissatisfaction and low-quality output (see the Peopleware entry on Wikipedia). The main way we get a sense of sustainable pace is by measuring burndown.
  • Burndown charts are one of the very few scrum artifacts.

The only scrum artifacts are the product backlog, the sprint backlog, and the burndown chart. Velocity is not by-the-book scrum, but we will talk about it as well.
To briefly describe burndown charts, here are some points about them and their usage:

  1. They measure the work the team has remaining in a sprint and show whether the team can finish its planned work
  2. They quickly alert you to production risks and failed-sprint risks
  3. They alert you to potential needs to re-prioritize tasks or move something to the backlog
  4. They can be used during a sprint retrospective to assess the estimating process and, in many cases, the need for some skill building around estimating

To have healthy teams and high-quality products, people need to work at a sustainable pace. Track this using burndown charts and velocity.

Burndown charts plot the total number of hours of work remaining in a sprint on the y-axis against the days of the sprint on the x-axis.

Velocity measures the total estimated size of the user stories or backlog items successfully delivered in a sprint. Measured over many sprints, a stable, predictable number should emerge.

Velocity can be used by both "chickens and pigs" to set realistic expectations for future sprints. Velocity is measured in the same units as feature estimates, whether that is "story points", "days", "ideal days", or "hours", all of which are considered acceptable. In simple terms, velocity in an agile world is the amount of work you can do in each iteration, based on experience from previous iterations. The Aberdeen Group, an IT research firm that has covered and published material on agile development, makes the claim that "When cost / benefit is measured in terms of the realization of management's expectations within the constraints of the project timeline, it is the ability to control velocity that drives the return on investment."
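
To make the arithmetic behind both measures concrete, here is a small sketch in Python; every number in it is invented for illustration.

```python
# Burndown: hours of work remaining at the end of each day of a 10-day sprint.
planned_hours = 200
remaining_by_day = [200, 185, 170, 160, 140, 120, 95, 70, 40, 10]

# The "ideal" burndown line assumes even progress from planned_hours to zero.
days = len(remaining_by_day)
ideal = [planned_hours * (1 - d / days) for d in range(1, days + 1)]

for day, (actual, target) in enumerate(zip(remaining_by_day, ideal), start=1):
    status = "behind" if actual > target else "on/ahead"
    print(f"day {day:2}: {actual:5.1f}h remaining (ideal {target:5.1f}h) -> {status}")

# Velocity: story points successfully delivered in the last five sprints.
delivered_points = [21, 18, 23, 20, 22]
velocity = sum(delivered_points) / len(delivered_points)
print(f"average velocity: {velocity:.1f} points per sprint")
# A stable average like this is what the team can reasonably commit to next sprint.
```

Real teams chart this rather than print it, but the underlying comparison, actual remaining work against an ideal line, and average delivered size per sprint, is exactly this simple.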

Measuring the burndown rate and calculating velocity will give you a reasonable amount of work for a team to do, at a pace conducive to happy teams releasing higher-quality software. To repeat from the introductory piece of this series on Agile: "Teams working at a reasonable pace will release higher quality software and have much higher retention rates — all leading to higher quality and greater customer satisfaction."

When the team feels really good about its abilities, it is encouraged to do better. The business starts to believe in the team, and this sets the team up to be in the zone. Once in the zone, the team can generally sustain its steady-state velocity month after month without burning out. Better yet, they get to enjoy doing it. Geoffrey Bourne, who writes for Dr. Dobb's Journal, notes, "The essence of creating a happy and productive team is to treat every member equally, respectfully and professionally." He believes Agile promotes this ethos, and I agree with him.

In conclusion, being agile means implementing practices that help product and development teams work most efficiently and be happy. There are many, many practices (again, XP has 12 core practices). Here, we discussed only the practices key to testing success. If your team calls itself agile and has not implemented some of these practices, it is crucial to bring them up in sprint retrospectives and talk about their benefits and the problems that skipping them has caused for your product and your customers.

There are other practices that need to be in place for success, and specifically for test teams to be successful and not be singled out in a blame game. These are covered in other installments, namely:

  • You have to have a scrum master.
  • Automate, automate, automate.
  • Use sprint retrospectives for process improvement.
Michael Hackett

Michael is a co-founder of LogiGear Corporation, and has over two decades of experience in software engineering in banking, securities, healthcare, and consumer electronics. Michael is a Certified Scrum Master and has co-authored two books on software testing: Testing Applications on the Web: Test Planning for Mobile and Internet-Based Systems (Wiley, 2nd ed. 2003) and Global Software Test Automation (Happy About Publishing, 2006).
He is a founding member of the Board of Advisors at the University of California Berkeley Extension and has taught for the Certificate in Software Quality Engineering and Management at the University of California Santa Cruz Extension. As a member of IEEE, his training courses have brought Silicon Valley testing expertise to over 16 countries. Michael holds a Bachelor of Science in Engineering from Carnegie Mellon University.


