Bonus Bugs

One of the most dreaded kinds of bugs is the kind caused by fixes of other bugs or by code changes made for feature requests. I like to call these the ‘bonus bugs,’ since they come on top of the bug load you already have to deal with.

Bonus bugs are the major rationale for regression testing in general and Test Automation in particular, since Test Automation is the best way to quickly retest an entire application after each round of code changes.
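To make this concrete, here is a minimal sketch of what such an automated regression check could look like with pytest. The `apply_discount` function and the rounding bug it guards against are hypothetical stand-ins for the application under test, not part of any real system.

```python
# Minimal regression-test sketch (pytest). The pricing function and the
# "rounding bug" it guards against are illustrative assumptions only.
import pytest


def apply_discount(price: float, percent: float) -> float:
    """Toy stand-in for the production code under test."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)


def test_discount_rounding_regression():
    # Pinned to a previously fixed rounding bug for small prices; rerunning
    # this after every round of code changes catches any reintroduction.
    assert apply_discount(0.10, 50) == 0.05


def test_discount_rejects_invalid_percent():
    with pytest.raises(ValueError):
        apply_discount(100.0, 150)
```

Running a suite of such checks after every change is cheap once it is automated, which is exactly why it catches bonus bugs that a one-time manual retest would miss.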

Since this is one ‘bonus’ you probably want to avoid, how do we prevent bonus bugs from occurring, and how do we detect them once they have been introduced? I will give some notes here from the perspectives of the developer, the tester, and the manager respectively.

Let’s first talk about the developer. A developer can do quite a lot to reduce the chances of bonus bugs. Today’s systems are becoming more and more complex, and this complexity only increases over time as changes to the system are made. Any change can easily trigger a problem somewhere else, thus producing a bonus bug.

There is a lot written about commenting and documenting code, which I will not go into here, but whatever standard you adhere to (or are told to adhere to), make sure that somebody else can easily “inherit” your code. It should take minimal energy for the next person to “decipher” and maintain the code you have written. Code should be written in small blocks, each of which starts with a meaningful comment. If there is something you want the next person to know about the code (e.g. a technical pitfall you had to work around), state it explicitly in the comments.
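As an illustration of that style, here is a short sketch in Python. The CSV format, the field names, and the encoding pitfall are all made up for the example; only the pattern of small, commented blocks matters.

```python
# Illustration of writing code in small, commented blocks and recording
# pitfalls explicitly. The file format and field names are hypothetical.
import csv
from pathlib import Path


def load_orders(path: Path) -> list:
    # Read the raw CSV rows. NOTE: the (hypothetical) export tool writes
    # Latin-1 rather than UTF-8; a pitfall the next maintainer should know.
    with path.open(newline="", encoding="latin-1") as f:
        rows = list(csv.DictReader(f))

    # Normalize amounts: the source system uses a comma as decimal separator.
    for row in rows:
        row["amount"] = float(row["amount"].replace(",", "."))

    # Drop cancelled orders; downstream reports assume they are filtered here.
    return [row for row in rows if row["status"] != "cancelled"]
```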

Another good policy is to have code changes reviewed and approved, either by a peer programmer or, even better, by a supervising “architect” who understands how the system is built up and what the consequences of system changes could be.

From the point of view of the tester, there are two main items to worry about: test design and level of Automation.

Test design is one of the most underestimated topics in IT. Most tests that I encounter in companies and other organizations are “lame”; they simply follow the system requirements one by one and don’t even attempt to combine different parts of the system’s functionality in creative ways that could reveal unexpected problems, like bonus bugs. Requirement-based tests are useful, but they have a low “ambition level,” and it can pay off to allocate time and resources to more aggressive tests.
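For example, a more ambitious test might deliberately interleave features that requirement-based tests would exercise in isolation. The sketch below uses a hypothetical shopping-cart API purely to illustrate the idea.

```python
# Sketch of a more "aggressive" test that combines several functions in one
# scenario. The ShoppingCart class is a hypothetical stand-in for the
# system under test.
class ShoppingCart:
    def __init__(self):
        self._items = {}
        self._discount = 0.0

    def add(self, sku: str, qty: int, price: float) -> None:
        self._items[sku] = (qty, price)

    def remove(self, sku: str) -> None:
        self._items.pop(sku, None)

    def apply_discount(self, percent: float) -> None:
        self._discount = percent

    def total(self) -> float:
        raw = sum(qty * price for qty, price in self._items.values())
        return round(raw * (1 - self._discount / 100), 2)


def test_discount_survives_remove_and_re_add():
    # Interleave discounting, removal, and re-adding the same item; a
    # requirement-by-requirement test would check each feature separately.
    cart = ShoppingCart()
    cart.add("A1", 2, 10.0)
    cart.apply_discount(10)
    cart.remove("A1")
    cart.add("A1", 1, 10.0)
    assert cart.total() == 9.0
```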

A high level of Test Automation will greatly enhance your ability to catch bonus bugs before they reach the release. Getting to such a level takes more than simply buying a test tool: a well-thought-out method of Test Automation, such as Keyword-Driven Testing, is essential, combined with training and coaching by experienced Test Automation experts.
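The essence of a keyword-driven (action-word) approach can be sketched in a few lines: test cases are rows of action words plus arguments, executed by a small interpreter. The action names and functions below are invented for illustration and are not tied to any particular tool.

```python
# Minimal keyword-driven testing sketch: test cases are data (action word
# plus arguments); a small interpreter maps each action to an implementation.
def do_login(user, password):
    print(f"logging in as {user}")


def do_check_title(expected):
    print(f"checking that the page title is {expected!r}")


ACTIONS = {
    "login": do_login,
    "check title": do_check_title,
}

# In a real setup these rows would typically live in a spreadsheet
# maintained by testers, not in the code itself.
TEST_CASE = [
    ("login", "alice", "secret"),
    ("check title", "Dashboard"),
]


def run(test_case):
    for action, *args in test_case:
        ACTIONS[action](*args)


if __name__ == "__main__":
    run(TEST_CASE)
```

The design point is that the test content stays readable and maintainable for testers, while the automation engineers maintain only the implementations of the actions.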

Finally, a few words from the perspective of the manager. Here the recommendation is quite simple: determine what bonus bugs can cost and what it is worth to prevent them. This is a business estimate and decision: bonus bugs cost money, and efforts to prevent them cost money too. The effects of bonus bugs (or any other kind of bug) are typically lost time before or after the system release, and decreased appreciation of you and your company by end users. Preventing bonus bugs takes extra time and money to follow policies and procedures for development and testing, which can include code reviews and setting up a high level of Test Automation.

By understanding how and why bonus bugs get introduced into applications, we can both prevent them from being introduced, and find them when they are. This takes a combined effort of the developers, testers, and managers, and it’s a very important step in ensuring that your end-product satisfies your customers and other stakeholders.

Hans Buwalda

Hans leads LogiGear’s research and development of test automation solutions, and the delivery of advanced test automation consulting and engineering services. He is a pioneer of the keyword approach for software testing organizations, and he assists clients in strategic implementation of the Action Based Testing™ method throughout their testing organizations.

Hans is also the original architect of LogiGear’s TestArchitect™, the modular keyword-driven toolset for software test design, automation and management. Hans is an internationally recognized expert on test automation, test development and testing technology management. He is coauthor of Integrated Test Design and Automation (Addison Wesley, 2001), and speaks frequently at international testing conferences.

Hans holds a Master of Science in Computer Science from Free University, Amsterdam.
