Technical Debt: a Nightmare for Testers

The sprint is almost over; the burn-down chart has not budged. The test team sits around waiting. They hear about all kinds of issues, obstacles and impediments at the daily stand-up, but there is no code to test. The team closes in on the demo and sprint review… then, at Wednesday's stand-up, the heroes arrive and tell everyone:

“All the stories are done. Everything is in the new build. Test team – get to work! You have one day to test everything for this sprint. We will have an internal demo of everything tomorrow afternoon and a demo to the PO on Friday morning. Get busy!”

Sound familiar? Your team has just gone over the cliff into certain technical debt.

As organizations gain experience with Agile, some trends have emerged, and technical debt is one of them. That is not a good trend. Technical debt is a big topic and getting larger by the day; much is written just about what it is. Some definitions stray far from the original, and some are simply wrong.

Companies and teams struggle with every aspect of technical debt: governance, management, documentation, communication, sizing and estimating, and tracking and prioritizing. Dealing with technical debt is difficult and new for most teams. There are dire predictions and warnings, and sadly, they are real. Some products, projects and teams have imploded under the weight of debt.
Like most concepts in Agile, technical debt can be used as a broad-brush classification, but here I will explore it strictly from the testing perspective, focusing on testers and their part in technical debt.

What is technical debt?

The definition of technical debt is large and still growing. Before going any further, let's look at the original definition. First coined by Ward Cunningham, the financial metaphor referred only to refactoring.

Now people talk and write about technical debt using all sorts of financial jargon – good debt, bad debt, interest, principal, mortgages and futures – while losing track of the real problem. Resist this. Stay basic. It is key for any organization to have a good, agreed-upon working definition of debt.

Technical debt happens when the team decides to “fix it later.” Anything we put off or postpone is considered debt, and it will come due with an interest payment. This is not to be confused with bugs that need to be fixed. Bugs are almost always associated with the function of the system, not testing tasks, and they are communicated, handled and managed differently. Technical debt is, as Johanna Rothman says, “what you owe the product,” such as missing unit tests and out-of-date database schemas – it's not about bugs!

Think of the difference between technical debt and bugs as similar to the old discussion of ‘issues vs. bugs.’

You know you have debt when you start hearing things like:

  • “Don’t we have documentation on the file layouts?”
  • “I thought we had a test for that!”
  • “If I change X it is going to break Y… I think.”
  • “Don’t touch that code. The last time we did it took weeks to fix.”
  • “The server is down. Where are the backups?”
  • “Where is the email about that bug?”
  • “We can’t upgrade. No one understands the code.”

Andy Lester, Get Out of Technical Debt Now!

Now let's look at the common causes and symptoms of technical debt, so you can recognize when you are getting into a debt situation. This list, gathered from a variety of sources, provides a solid and broad understanding of both:

  • Lack of test coverage.
  • Muddy or overly rigid content type definitions.
  • Hardcoded values.
  • Misused APIs.
  • Redundant code.
  • Inappropriate or misunderstood design patterns.
  • Brittle, missing or non-escalating error handling.
  • Unscalable software architectural design.
  • Foundational commitment to an abandoned platform.
  • Missing or inaccurate comments and documentation.
  • “Black box” components.
  • Third-party code that’s fallen far behind its public stable release.
  • Overly long classes, functions, control structures (cyclomatic complexity).
  • Clashing programming or software architectural styles within a single application.
  • Multiple or obscure configuration file languages.
  • Hardwired reliance on a specific platform or product (e.g., MySQL, Solaris, Apache httpd).

Matt Holford, Can Technical Debt Be Quantified? The Limits and Promise of the Metaphor

The problem

From reading this list of causes and symptoms, it's easy to see how products, processes and practices can get unnecessarily complicated and become slow, buggy and difficult to execute and manage. Teams working on such systems then spend more time dealing with systemic issues than developing new functionality, which slows the delivery of customer value. By the way, decreasing velocity is often one of the first signs that a team is carrying too much technical debt.

Technical debt happens and sometimes it is understandable. Software development happens over time. It’s not a nice, linear process. Very often things are not clear until the team attempts to actually build something. Problems and solutions unfold along with the project’s clarity and we all know that not everything can be planned for.

Let’s look at some reasons why this occurs:

  • User stories are too big.
  • The team did not fully understand the user story or it lacked acceptance criteria to better describe what was to be built.
  • Low estimating skill or consistently unrealistic estimates.
  • No use of spikes to better understand what is to be developed.
  • Team is too pressured to “get it done!”
  • Weak ScrumMaster or overbearing Product Owner.
  • Unexpected things happened.
  • Very short timeframes for sprints make teams rush and focus only on what must be done to get a release – at the exclusion of “good things to do.”
  • JIT (just-in-time) architecture or design.

Special concerns for Testers

1 – Team attitudes about Testing

There are situations where debt builds from how the team handles testing, specifically for testers. Some teams are still under intense pressure to deliver on a fixed date. Regardless of the state of testing, the findings from testing, or the test coverage, there is pressure on testers to “say it works.”

  • Some Agile basics, from XP (eXtreme Programming) need to be understood here. Working at a sustainable pace and respecting a team’s velocity are important.
  • When there is old style management (“chickens” dictating what has to be done to “pigs”) teams invariably have to cut corners and testing almost always gets crunched.
  • Sometimes, teams get into debt trouble with testing because testers were not included in user story estimation. The testing takes longer than expected; the team cuts corners and builds debt. And there are always bugs! That is not the issue. It is the pressure to defer, minimize, or ignore that builds debt.

Many of the original Scrum teams I worked with struggled with having cross-functional teams. Now that Scrum has been around for a few years, I see fewer companies attempting to have cross-functional teams.

When the Scrum Guide explains cross-functional teams, the description promotes iterative design, refactoring, collaboration, cooperation, and communication, but shuns handoff. All of these reduce gaps, provide early and expanded testing communication and information, and build a fuller understanding – all of which reduces technical debt.

Yet the way Scrum has generally evolved promotes handoff and less collaboration and communication, which increases technical debt.

For integrated teams, this means sitting together, discussing, talking and refactoring. It means asking questions, driving the development by developing tests (TDD); it is absolutely iterative and full of refactoring. Anti-Agile is when developers work in isolation and handoff completed code to testers to validate and call done.

Handoff, otherwise known as AgileFalls, is a dirty word in Agile.

I was asked to help a company and found out, within the first half hour, that they had a programmer sprint, then a tester sprint. I said, “That sounds like waterfall.”

They totally misunderstood Scrum teams.

2 – The Cliff: a special Scrumbut situation

Testers still get time crunched. Back in the traditional software development days, test teams very often lost schedule time they had planned for. This continues as a common practice in the Agile world.

The following graphs allow you to visualize this situation.

The Crunch

Hans Buwalda has often used these diagrams to describe problematic software development projects. In the planning stage, each phase or team gets its allotted time.

In reality, requirements are defined late or added late, the design is late, or the code is late; testers get crunched for time so the team won't slip the schedule.

The Cliff

A theoretical burndown chart illustrates the same idea. User stories and “user story points” move from “In Development” to “In Testing” at a somewhat steady pace and are delivered over time – this is the ideal.

The troubling phenomenon common to so many teams these days is the cliff. Testers wait and wait and, as the final days of the sprint approach, the bulk of the user stories is dumped on them, with the expectation of full validation and testing just as the sprint demo and review come up.
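The cliff can even be spotted in sprint data. Here is a minimal sketch, using a hypothetical function and invented numbers, that measures what fraction of stories reached testers only in the sprint's final days:

```python
def cliff_ratio(days_entered_testing, sprint_length, window=2):
    """Fraction of stories that entered 'In Testing' in the final
    `window` days of the sprint. A high ratio signals a cliff."""
    if not days_entered_testing:
        return 0.0
    late = [d for d in days_entered_testing if d > sprint_length - window]
    return len(late) / len(days_entered_testing)

# Hypothetical 10-day sprint: 8 of 10 stories landed on testers at the end.
ratio = cliff_ratio([3, 5, 9, 9, 10, 10, 10, 10, 10, 10], sprint_length=10)
print(f"{ratio:.0%} of stories hit testing at the cliff")  # prints "80% ..."
```

A team tracking this number per sprint can bring it to the retrospective as hard evidence instead of a complaint.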

There is no way a test team can do an effective job at this point. Most teams in this situation, under pressure from product owners/customers/whomever, make up quick and dirty rules:

  • The story is “done but not tested.” (ScrumBut)
  • Test it in the next sprint while they wait for new functionality. (AgileFalls)
  • Break the story into 2 stories, the coding and the testing. The coding is done. (ScrumBut and AgileFalls)
  • Say it’s done and if the PO finds a bug during the demo we can write a new user story on that bug. (ScrumBut)

…And many more creative and flawed ways to “count the story points for velocity,” or say it’s done and build more technical debt.

There is so much wrong with these solutions, so much ScrumBut and AgileFalls combined, that they deserve their own article on recognition and remediation. We will discuss solutions later in this article, but for now, know that these situations are not normal, they are not Scrum, they are not good, and they need to be resolved in sprint retrospectives.

3 – Specific to automation

Many Agile teams take their development practices from XP (eXtreme Programming): TDD (test-driven development), CI (continuous integration), pair programming, sustainable pace, small releases and the planning game. Underlying these are foundational practices that allow teams to achieve those goals: unit test automation, user story acceptance criteria automation, high-volume regression test automation and automated smoke tests (quick build acceptance tests for the continuous integration process).

Many test teams struggle with the need for speed in automating tests in Agile development. To create and automate tests quickly, some teams use unstructured record-and-playback methods, resulting in “throw-away automation”: quick and dirty, typically suitable only for the current sprint, and created with no intention of maintenance. Struggling teams resign themselves to throw-away automation, or do 100% manual testing during a sprint and automate what they can, if they can, one, two, three or more sprints after the production code is written. Automation suites that lose relevance with each new functional release, without enough time for maintenance, upgrades, infrastructure, or intelligent automation design, are a drain on resources. In my experience, test automation is rarely accounted for when product teams quantify technical debt. This is changing, and needs to change more.

To remedy these problems, many teams develop their test automation frameworks in sprints independent of the production code. Since automation is software, its development can be treated as a separate development project that supports the production code. Using this approach, automation code should have code reviews, coding standards and its own testing; otherwise technical debt will accrue in the form of high maintenance costs.
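Treating automation as real software means replacing recorded, hardcoded scripts with small, reviewed, tested helpers. A minimal sketch of the contrast, using a hypothetical login flow (the function and payload shape are invented for illustration):

```python
# Throw-away style (recorded script, duplicated everywhere):
#   type("user1"); type("Passw0rd!"); click(640, 480)  # breaks on any change
#
# Maintainable style: one shared helper, with its own standards and tests.
def login_request(username, password):
    """Build the login payload in one place (hypothetical API shape)."""
    if not username or not password:
        raise ValueError("credentials required")
    return {"action": "login", "user": username, "secret": password}

# The helper itself is tested -- automation code is code.
payload = login_request("tester", "s3cret")
assert payload["user"] == "tester"
```

When the login flow changes, only the helper changes; every suite built on it keeps working.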

I've always been amazed at the lack of coding standards, design and bug-finding work applied to automation code that is intended to verify production code created with rigorous processes. I hope I'm not the only one who sees the shortsightedness in this.

4 – “Done but not done done.”

It could be said that any team using the phrase “done but not done done” is building debt just by saying it! There is a mess building up around the Definition of Done (DoD). The Scrum Guide stresses that a team needs a clear Definition of Done, but it has become obvious over time that teams don't always have one.

For some teams, the Definition of Done has morphed into the old waterfall style of “milestone criteria” and “entrance/exit criteria”: nice to have, but not really enforced, due to schedule constraints and product owner pressure to get some functionality out to the customer. This is a problem.

With short, often stressed sprint iterations, the pressure to call something done and move on to the next function very often surpasses the pressure to get things done right. Corners get cut and things are skipped (this is the essence of technical debt!). A strong, enforced Definition of Done helps prevent technical debt. I have seen this in many smooth-running Agile development environments: a great DoD is the foundation, and it is strictly enforced. The phrase “done done” is never used. A story is either done or it is not!

The Definition of Done must be agreed upon by the team. It is enforced by the ScrumMaster, not testers. This, by the way, is a great change in Agile. Many traditional-style development teams were ruled by the milestone police, which sometimes created ill will as testers held back milestones or felt trampled when milestones were passed without actually being achieved. In Scrum, it is the ScrumMaster's job to determine done. But since the tester provides much of the information, this is a great place for them to communicate fully about what is tested, what is automated, what works, what doesn't, and any ancillary tasks that may not have been done. This is the core of risk recognition and risk communication.

Having a solid, universally known, adhered-to Definition of Done – explicit, agreed upon, no exceptions, workable, meaningful and measurable – is essential.
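Because the DoD must be explicit and measurable, some teams go as far as encoding it as a literal checklist that gates the done flag. A minimal sketch, with criteria names invented for illustration:

```python
# Hypothetical DoD criteria -- each team defines and agrees on its own.
DEFINITION_OF_DONE = (
    "code_reviewed",
    "unit_tests_pass",
    "acceptance_criteria_automated",
    "docs_updated",
)

def is_done(story):
    """A story is done only when every criterion is met -- no 'done done'."""
    return all(story.get(criterion, False) for criterion in DEFINITION_OF_DONE)

story = {"code_reviewed": True, "unit_tests_pass": True,
         "acceptance_criteria_automated": False, "docs_updated": True}
print(is_done(story))  # prints "False": it is done or it is not
```

The point is the all-or-nothing check: there is no partial credit, which is exactly what a strictly enforced DoD demands.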

5 – Test Documentation

When testing is not documented you create debt.

Agile practices have borrowed much from Lean development. Lean means lean documentation, not no documentation! Document what is essential. Eliminate redundancy. Do not document merely for the sake of documenting, or because there is a tool for it. Resist this. Document for good reasons: knowledge transfer, repeatability, automation and correctness.

6 – A special reminder on Risk

Testers have to be especially skilled in all aspects of risk management. Technical debt sometimes sneaks up on teams because individuals do not recognize it or do not communicate it effectively. Some of the most important skills testers can build are risk recognition, assessment and communication. Recognizing and communicating risk should be as lucid and pragmatic as possible, either to prevent debt or to enable a better risk analysis before accruing it.

Preventions and Solutions

For the whole team, prevent debt by implementing these great recommendations from Ted Theodoropoulos:

  • Document system architectures.
  • Provide adequate code coverage in QA processes.
  • Implement standards and conventions.
  • Properly understand the technologies leveraged.
  • Properly understand the processes the technology is designed to support.
  • Refactor the code base or architecture to meet changing needs.

Ted Theodoropoulos, Technical Debt

Prevent technical debt by investing heavily in test automation. The more you invest in the design and code of your test automation, the bigger the long-term payback. Shortcuts and sloppy test automation may save a little time now, but will cost your team significantly more over time.

What is becoming clearer to many teams is the similarity of technical debt to financial debt, and that ignoring it produces the worst possible outcome: the debt gets bigger! Recognize it. State it. Write it down. Communicate it. Make the debt visible. Keep an explicit debt list. Developer Alex Pukinskis suggests teams use fluorescent pink index cards on the team/kanban board for debt.

  • Service the debt. If you accumulate too much debt, you spend all your income/resources paying down the interest and never the principal.
  • Get a great, agreed-upon definition of technical debt.
  • Use spikes to better analyze user stories that are not well understood or teams are hesitant to give estimates.
  • Pay attention to velocity! When velocity drops, the team is bogged down or not getting stories “done.”
  • Prevent debt by improving estimating skills and learning from mistakes.
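Paying attention to velocity can itself be automated. A minimal sketch, with a hypothetical function and invented numbers, that flags a sprint whose velocity falls well below the recent average, often an early symptom of debt:

```python
def velocity_alert(velocities, window=3, threshold=0.8):
    """Return True if the latest sprint's velocity dropped below
    `threshold` times the average of the preceding `window` sprints."""
    if len(velocities) <= window:
        return False  # not enough history yet
    baseline = sum(velocities[-window - 1:-1]) / window
    return velocities[-1] < threshold * baseline

history = [21, 23, 22, 24, 15]  # hypothetical story points per sprint
print(velocity_alert(history))  # prints "True": 15 is under 80% of ~23
```

The numbers and threshold are illustrative; the value is in making the drop visible automatically rather than noticing it three sprints too late.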

A few Scrum basics and reminders:

  • The PO accepts a story as done. If development, process or testing shortcuts were taken, point them out.
  • The ScrumMaster must be an expert on the Scrum process:
      • Be the Scrum police.
      • Remove obstacles and stick to the rules.
      • Keep chickens out of the pigs' way.
      • If Scrum rules are routinely broken or compromised, the ScrumMaster has the responsibility to fix it.
      • Accurately measure burndown and velocity.
  • Testers and the whole team need to use sprint retrospectives. Where processes are not followed or break down, recognize it, report it, and communicate it effectively.
  • Chickens need to be made more aware of velocity and the realities of what can't get done.
  • A story is not done if there are bugs or issues to resolve. Don't split stories.
  • Have a great Definition of Done, enforced by the ScrumMaster, not testers. This removes the politics of testers being the “no” people.
  • Use hardening or regression sprints for integration.

Summary

Technical debt is inevitable, but it is not all bad. It has to be communicated, managed and serviced! When teams are chronically in debt, the issues that created it need to be recognized, communicated and, hopefully, resolved.

Test teams play a special role in certain types of debt. They should take special care not to create more debt through compromised test automation.

Test teams can greatly help the team through recognizing and communicating debt issues as they arise. They need to document intelligently, not document everything.

Learning more about Scrum processes, or whatever lifecycle processes your team follows, can be a big benefit in preventing and dealing with debt.

Michael Hackett

Michael is a co-founder of LogiGear Corporation, and has over two decades of experience in software engineering in banking, securities, healthcare and consumer electronics. Michael is a Certified Scrum Master and has co-authored two books on software testing: Testing Applications on the Web: Test Planning for Mobile and Internet-Based Systems (Wiley, 2nd ed. 2003) and Global Software Test Automation (Happy About Publishing, 2006). He is a founding member of the Board of Advisors at the University of California Berkeley Extension and has taught for the Certificate in Software Quality Engineering and Management at the University of California Santa Cruz Extension. As a member of IEEE, his training courses have brought Silicon Valley testing expertise to over 16 countries. Prior to co-founding LogiGear, Michael managed QA teams at The Well, Adobe Systems, and PowerUp Software. His clients have included Palm Computing, Oracle, CNET, Electronics for Imaging, The Learning Company, and PC World. Michael holds a Bachelor of Science in Engineering from Carnegie Mellon University.

