Bonus Bugs

Among the most dreaded kinds of bugs are those caused by fixes to other bugs or by code changes made for feature requests. I like to call these ‘bonus bugs,’ since they come on top of the bug load you already have to deal with.

Bonus bugs are the major rationale for regression testing in general and Test Automation in particular, since Test Automation is the best way to quickly retest an entire application after each round of code changes.

Since this is probably one ‘bonus’ you would rather do without, how do we prevent bonus bugs from occurring, and how do we detect them once they have been introduced? I will give some notes here from the perspective of the developer, the tester, and the manager in turn.

Let’s first talk about the developer. A developer can do quite a lot to reduce the chances of bonus bugs. Today’s systems are becoming more and more complex, and this complexity only increases over time as changes to the system are made. Any change can easily trigger a problem somewhere else, thus producing a bonus bug.

There is a lot written about commenting and documenting code, which I will not go into here, but whatever standard you adhere to (or are told to adhere to), make sure that somebody else can easily “inherit” your code. It should take minimal energy for somebody to “decipher” and maintain the code you have written. Write code in small blocks, each of which starts with a meaningful comment. In particular, if there is something you want the next person to know about the code (e.g. a technical pitfall you had to work around), state it explicitly in the code comments, as in the sketch below.
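
As a purely illustrative sketch (the function, the order structure, and the pitfall are all invented for this example), here is what such small, commented blocks can look like in Python:

    import datetime

    def archive_expired_orders(orders, today=None):
        """Split orders into active and expired; return both lists."""

        # Default to the current date; taking it as a parameter keeps the
        # function easy to test.
        if today is None:
            today = datetime.date.today()

        # Walk the orders once, sorting each into the right bucket.
        active, expired = [], []
        for order in orders:
            # Pitfall: expiry dates arrive as ISO strings from the upstream
            # system, so parse them here instead of assuming date objects.
            expiry = datetime.date.fromisoformat(order["expires"])
            (expired if expiry < today else active).append(order)

        return active, expired

Whoever inherits this code can read the comments block by block, and the pitfall note warns them before they “simplify” the string parsing away and introduce a bonus bug.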

Another good policy is to have code changes reviewed and approved, either by a peer programmer or, even better, by a supervising “architect” who understands how the system is built up and what the consequences of changes to it could be.

From the point of view of the tester, there are two main items to worry about: test design and the level of Test Automation.

Test design is one of the most underestimated topics in IT. Most tests that I encounter in companies and other organizations are “lame”: they simply follow the system requirements one by one and don’t even attempt to combine different parts of the system’s functionality in creative ways that could reveal unexpected problems, such as bonus bugs. Requirement-based tests are useful, but they have a low “ambition level,” and it can pay off to allocate time and resources to more aggressive tests, like the one sketched below.
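
As a hypothetical illustration (the Cart class and its features are invented for this sketch), here is what such a cross-feature test could look like in Python, as opposed to checking each requirement in isolation:

    import unittest

    class Cart:
        """Toy in-memory shopping cart, invented for this illustration."""

        def __init__(self):
            self.items = {}      # name -> price
            self.discount = 0.0  # fraction, e.g. 0.10 for 10%

        def add(self, name, price):
            self.items[name] = price

        def remove(self, name):
            del self.items[name]

        def apply_discount(self, fraction):
            self.discount = fraction

        def total(self):
            return sum(self.items.values()) * (1 - self.discount)

    class CrossFeatureTest(unittest.TestCase):
        """Chains several features in one flow instead of testing each alone."""

        def test_discount_survives_later_cart_changes(self):
            cart = Cart()
            cart.add("book", 20.0)
            cart.add("pen", 5.0)
            cart.apply_discount(0.10)  # feature: discounts
            cart.remove("pen")         # feature: removal, after the discount
            cart.add("mug", 10.0)      # feature: adding, after the removal
            # A one-requirement-at-a-time test would never exercise this
            # order of operations, which is exactly where bonus bugs hide.
            self.assertAlmostEqual(cart.total(), 27.0)

    if __name__ == "__main__":
        unittest.main()

Each individual feature here might pass its own requirement-based test, while the combination (a discount followed by cart changes) is where a code change is most likely to break something unnoticed.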

A high level of Test Automation will greatly enhance your ability to catch bonus bugs before they reach the release. Simply buying a test tool will not get you to that level. A well thought-out method of Test Automation, such as Keyword-Driven Testing (sketched below), is essential, combined with training and coaching by experienced Test Automation experts.
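
To give a rough idea of the principle (this is a toy sketch, not how any particular tool implements it), keyword-driven tests express the steps as data lines that each start with an action word, while a thin layer of code maps every action word to an implementation function:

    # Each implementation function receives the action's arguments and a
    # shared state dictionary standing in for the application under test.
    def open_login(args, state):
        state["screen"] = "login"

    def enter_credentials(args, state):
        state["user"], state["password"] = args

    def check_screen(args, state):
        assert state["screen"] == args[0], f"expected screen {args[0]!r}"

    # The mapping from action words to their implementations.
    ACTIONS = {
        "open login": open_login,
        "enter credentials": enter_credentials,
        "check screen": check_screen,
    }

    # The test itself is a table of action lines: data, not script code.
    TEST = [
        ("open login",),
        ("enter credentials", "alice", "secret"),
        ("check screen", "login"),
    ]

    def run(test):
        state = {}
        for action, *args in test:
            ACTIONS[action](args, state)

    run(TEST)
    print("test passed")

Because the tests are data rather than scripts, testers can write and maintain many of them without programming, and only the small action layer needs updating when the system under test changes.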

Finally, a few words from the perspective of the manager. Here the recommendation is in fact quite simple: determine what bonus bugs can cost and what it is worth to prevent them. This is a business estimate and decision: bonus bugs cost money, and efforts to prevent them cost money too. The typical effects of bonus bugs (or any other kind of bug) are lost time before or after the system release, and/or decreased appreciation of you and your company by end users. Preventing bonus bugs takes extra time and money to follow policies and procedures for development and testing, which can include code reviews and setting up a high level of Test Automation. A back-of-the-envelope comparison, as sketched below, is often enough to start the discussion.
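
As a purely illustrative calculation (every figure below is an invented assumption, not a benchmark):

    # All figures are invented assumptions for illustration only.
    bugs_per_release = 12           # expected bonus bugs without extra measures
    cost_per_escaped_bug = 4_000    # support, patching, and goodwill per bug
    prevention_cost = 30_000        # reviews plus automation upkeep per release
    prevention_effectiveness = 0.8  # fraction of bonus bugs caught in time

    expected_loss = bugs_per_release * cost_per_escaped_bug         # 48,000
    residual_loss = expected_loss * (1 - prevention_effectiveness)  # ~9,600
    net_benefit = expected_loss - residual_loss - prevention_cost   # ~8,400
    print(f"net benefit of prevention per release: {net_benefit:+,.0f}")

With these made-up numbers, prevention pays for itself; with your own numbers the outcome may differ, and making that call is exactly the manager’s job.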

By understanding how and why bonus bugs get introduced into applications, we can both prevent them from being introduced and find them when they are. This takes a combined effort from developers, testers, and managers, and it is an important step in ensuring that your end product satisfies your customers and other stakeholders.

Hans Buwalda

Hans leads LogiGear’s research and development of test automation solutions, and the delivery of advanced test automation consulting and engineering services. He is a pioneer of the keyword approach for software testing organizations, and he assists clients in strategic implementation of the Action Based Testing™ method throughout their testing organizations.

Hans is also the original architect of LogiGear’s TestArchitect™, the modular keyword-driven toolset for software test design, automation and management. Hans is an internationally recognized expert on test automation, test development and testing technology management. He is coauthor of Integrated Test Design and Automation (Addison Wesley, 2001), and speaks frequently at international testing conferences.

Hans holds a Master of Science in Computer Science from Free University, Amsterdam.
