Jungle Testing

As I write this article I am sitting at a table at StarEast, one of the major testing conferences. As you would expect from a testing conference, much of the talk and discussion is about bugs and how to find them. What I have noticed in some of these discussions, however, is the lack of a differentiation between types of bugs that I think is essential for testing success.

To set the stage I would like to distinguish between different categories of bugs (for a more extensive differentiation of bugs, see the very interesting presentation by Giri Vijayaraghavan and Cem Kaner, “Bug taxonomies: Use them to generate better tests”). The main categories of bugs I would like to introduce are:

  1. Coding bugs – things that were implemented differently than intended or specified.
  2. “Jungle bugs” – unexpected situations that were not anticipated in the specifications and are therefore not handled well in the code.

Coding bugs are commonly the more straightforward ones to find and fix. Good unit testing should be able to catch many of them, and a lot of tests can be designed by following the requirements or specifications that are available.

The unexpected situations are harder. If they were not hard, the situations would not be “unexpected”. Examples in this category are unanticipated user actions (“a user would never do this”), a failing environment (an interrupted TCP/IP connection), unexpected data, etc. Some of these can also be malicious, like the common buffer overflow trick, where a hacker deliberately sends an extremely long value that overflows an internal buffer and, by overwriting a return address, redirects a function call to malicious code.
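To make the idea concrete, here is a minimal sketch in Python (the handle_name_field function and the length limit are hypothetical, not from any particular system) of the kind of up-front validation that takes the “extremely long value” scenario off the table before it reaches lower-level code:

```python
# Hypothetical illustration: reject oversized input up front instead of
# trusting that "a user would never send a value this long".
MAX_NAME_LENGTH = 64  # assumed limit; in practice taken from the field specification

def handle_name_field(raw_value: str) -> str:
    """Validate a user-supplied name before it reaches lower-level code."""
    if len(raw_value) > MAX_NAME_LENGTH:
        # Fail cleanly with a message rather than passing the value on to
        # code that may copy it into a fixed-size buffer.
        raise ValueError("invalid data: value too long")
    return raw_value.strip()
```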

A special category of coding bugs is what we could call “indirect bugs”. Indirect bugs occur when one part of a system has a coding bug that leaves a bad value in a table or a variable, and that value later causes a failure or crash in another part of the system. Even though the issue is a coding bug, to the affected part of the system it is an unexpected situation. Whether this is a jungle bug would, in my view, depend on how the unexpected value or situation is handled. If it causes a crash, I would consider it a jungle bug: code should never crash, regardless of what data is accidentally fed into it.
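As a small, hypothetical Python sketch of the idea: one function leaves a bad value behind, and the function that later reads it can either crash on it or treat it as the error it is.

```python
# Hypothetical illustration of an "indirect bug": a coding bug in one part of
# the system leaves a bad value behind; another part later trips over it.
def store_order(orders: dict, order_id: str, quantity: int) -> None:
    # A coding bug elsewhere could let a zero quantity slip into the table.
    orders[order_id] = {"quantity": quantity}

def average_item_price(order: dict, total_price: float) -> float:
    quantity = order["quantity"]
    if quantity <= 0:
        # Defensive handling: report the unexpected value instead of
        # crashing with a ZeroDivisionError further down.
        raise ValueError("invalid data: quantity must be positive")
    return total_price / quantity
```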

Can thinking about bugs this way help in finding and preventing them? My answer would be “yes”. It adds an extra level of differentiation to your test design. Not only would the design for the two categories be different, but the steps you take to find the bugs can also be different.

The first suite of tests would aim at coding bugs. I would like to call them “functional tests”, and they would include most of the unit testing. Such tests can be designed in a straightforward manner, directly related to the system requirements and/or functional specifications. For example, one or more test cases are defined for each requirement, and a tester who knows the subject matter under test well should be able to produce most of them without the involvement of others. For this category it can also make sense to measure code coverage, for which good tools are available.
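As an illustration, here is a minimal sketch using Python’s unittest and a hypothetical requirement (“orders of $100.00 or more get a 10% discount”); each test case maps directly to one side of the requirement’s boundary:

```python
import unittest

# Hypothetical requirement: "orders of $100.00 or more get a 10% discount".
def apply_discount(order_total: float) -> float:
    if order_total >= 100.0:
        return round(order_total * 0.9, 2)
    return order_total

class DiscountRequirementTest(unittest.TestCase):
    # Straightforward, requirement-driven test cases around the boundary.
    def test_discount_applied_at_threshold(self):
        self.assertEqual(apply_discount(100.0), 90.0)

    def test_no_discount_below_threshold(self):
        self.assertEqual(apply_discount(99.99), 99.99)

if __name__ == "__main__":
    unittest.main()
```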

Another suite of tests should look for jungle bugs. To name the kind of testing that caters to jungle bugs, we could use the term “jungle testing”. For a test designer this is an ambitious task: you will have to look for potentially unexpected conditions, or combinations of conditions. Some of the things that you could do include:

  • Focus more on the business than on the requirements, trying to find out which unusual events and circumstances can happen.
  • Talk to people who know the business or system under test well, like end users and business analysts.
  • Work together with other testers and discuss ideas in meetings.
  • Go for “depth” over “breadth”, looking for hidden bugs in specific areas rather than broad coverage (which is more of an objective for the first category of tests, the one that looks for coding bugs).
  • Ask and discuss “what if” questions, like what happens if a user enters letters instead of numbers.
  • Apply risk analysis to determine what should never happen with the system under test, and what conditions could cause such a result.
  • Use “exploratory testing”, a technique that lets testers, in pairs, work with a system interactively (see: What is Exploratory Testing? by James Bach). Since we are dealing with “jungles”, any technique with “exploration” in the name should be a good fit…

When designing jungle tests, work with the designers to determine the kinds of situations the tests are going to address. This allows the design to be updated to cover those situations, or you may even learn that handling a certain situation has no priority and does not have to be tested. It is also good to have a generic requirement for unexpected situations: how resistant should the system be, and how should it typically respond? I can imagine that for most systems the requirement for an unusual and invalid input would be: (1) don’t crash, and (2) give the user a message, which can be as simple as “invalid data”.
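To illustrate, here is a sketch of what a jungle test against that generic requirement could look like, again with hypothetical Python code (parse_quantity is an assumed input handler). The tests feed in input that no specification mentions and only check that the response is a controlled “invalid data” error rather than a crash:

```python
import unittest

# Hypothetical input handler under test.
def parse_quantity(raw_value: str) -> int:
    try:
        quantity = int(raw_value)
    except ValueError:
        raise ValueError("invalid data")
    if quantity < 0:
        raise ValueError("invalid data")
    return quantity

class JungleInputTest(unittest.TestCase):
    # "What if a user enters letters instead of numbers?"
    def test_letters_instead_of_numbers(self):
        with self.assertRaises(ValueError):
            parse_quantity("abc")

    # "What if the input is absurdly long garbage?"
    def test_unexpectedly_long_garbage(self):
        with self.assertRaises(ValueError):
            parse_quantity("9" * 10000 + "x")

if __name__ == "__main__":
    unittest.main()
```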

I feel that treating unexpected-situation testing (“jungle testing”) as a separate category in test development can give it additional focus, thus exceeding the aggressiveness that is usually achieved with regular requirement-based test cases. Per situation, a decision can be made whether, and to what extent, this is worth the effort.

Hans Buwalda

Hans leads LogiGear’s research and development of test automation solutions, and the delivery of advanced test automation consulting and engineering services. He is a pioneer of the keyword approach for software testing organizations, and he assists clients in strategic implementation of the Action Based Testing™ method throughout their testing organizations.

Hans is also the original architect of LogiGear’s TestArchitect™, the modular keyword-driven toolset for software test design, automation and management. Hans is an internationally recognized expert on test automation, test development and testing technology management. He is coauthor of Integrated Test Design and Automation (Addison Wesley, 2001), and speaks frequently at international testing conferences.

Hans holds a Master of Science in Computer Science from Free University, Amsterdam.

