Jungle Testing

As I write this article I am sitting at a table at StarEast, one of the major testing conferences. As you might expect at a testing conference, much of the talk and discussion is about bugs and how to find them. What I have noticed in some of these discussions, however, is a lack of differentiation between the different types of bugs, a differentiation that I think is essential for testing success.

To set the stage I would like to distinguish between different categories of bugs. (For a more extensive differentiation of bugs, see the very interesting presentation by Giri Vijayaraghavan and Cem Kaner, “Bug taxonomies: Use them to generate better tests”.) The main categories of bugs I would like to introduce are:

  1. Coding bugs – things that were implemented differently than intended or specified.
  2. “Jungle bugs” – unexpected situations, not anticipated in the specifications and therefore not handled well in the code.

The coding bugs are commonly the more straightforward ones to find and fix. Good unit testing should be able to catch many of them, and many tests can be designed by following the requirements or specifications that are available.

The unexpected situations are harder; if they were not hard, the situations would not be “unexpected”. Examples in this category are unanticipated user actions (“a user would never do this”), a failing environment (an interrupted TCP/IP connection), unexpected data, etc. Some of these can also be malicious, like the common buffer overflow attack, where a hacker deliberately sends an extremely long value that overflows an internal buffer and, by overwriting a return address, redirects a function call to malicious code.
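To make this concrete, here is a minimal sketch in Python of what a test for such an unexpected situation could look like. The parser parse_quantity, its limits, and its error behavior are all invented for illustration; the point is that a jungle-style test feeds in the kind of oversized input no specification mentions, and checks for a controlled failure rather than a crash.

```python
import pytest

# Hypothetical parser: parse_quantity and its limits are assumptions
# made up for this sketch, not taken from any real system.
def parse_quantity(text: str) -> int:
    """Parse a quantity field; reject anything but a small positive integer."""
    if len(text) > 10:                      # cap input length up front
        raise ValueError("input too long")
    if not text.isdigit():
        raise ValueError("not a number")
    value = int(text)
    if not (1 <= value <= 1_000_000):
        raise ValueError("out of range")
    return value

def test_oversized_input_is_rejected_cleanly():
    # A "jungle" input: vastly longer than any legitimate value.
    with pytest.raises(ValueError):
        parse_quantity("9" * 100_000)
```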

A special category of coding bugs is what we could call “indirect bugs”. Indirect bugs occur when one part of a system has a coding bug that leaves a bad value in a table or a variable, which then causes a failure or crash in another part of the system. Even though the issue is a coding bug, to the affected part of the system it is an unexpected situation. Whether this is a jungle bug would, in my view, depend on how the unexpected value or situation is handled. If it causes a crash I would consider it a jungle bug. It is better for code never to crash, regardless of what data is accidentally fed into it.
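Here is a hypothetical Python sketch of such an indirect bug (all names are invented): one function stores an unvalidated discount, and a second function would crash on it later unless it treats the bad value as the unexpected situation it is.

```python
# Shared state, e.g. a table keyed by order id.
records: dict[str, object] = {}

def store_discount(order_id: str, discount) -> None:
    # Coding bug: no validation, so a None or out-of-range value slips in.
    records[order_id] = discount

def apply_discount(order_id: str, price: float) -> float:
    discount = records.get(order_id)
    # Defensive check: turn the unexpected value into a managed error
    # instead of a TypeError deep inside the billing code.
    if not isinstance(discount, (int, float)) or not (0 <= discount <= 1):
        raise ValueError(f"invalid discount for order {order_id!r}: {discount!r}")
    return price * (1 - discount)
```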

Can thinking about bugs this way help in finding and preventing them? My answer would be “yes”. It gives an extra differentiation in your test design: not only would the designs for the two categories be different, but the steps you take to find the bugs can differ as well.

The first suite of tests would aim for coding bugs. I would like to call them “functional tests”, which would include most of the unit testing. Such tests would be designed in a straightforward manner, directly related to system requirements and/or functional specifications. For example, one or more test cases are defined for each requirement, and a tester who knows the subject matter under test well should be able to produce most of them without involvement of others. For this category it can also make sense to measure code coverage, for which good tools are available.
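As a sketch of how direct this mapping can be, here is a hedged Python/pytest example. The requirement text, the function, and the threshold are all hypothetical; each parametrized case traces straight back to the requirement, including the boundary value it implies.

```python
import pytest

# Hypothetical requirement R-12: "Shipping is free for orders of $50 or more;
# otherwise a flat $5 fee applies." Function and values are assumptions.
def shipping_fee(order_total: float) -> float:
    return 0.0 if order_total >= 50.0 else 5.0

@pytest.mark.parametrize("total, expected", [
    (49.99, 5.0),   # just below the threshold
    (50.00, 0.0),   # exactly on the threshold
    (120.0, 0.0),   # comfortably above it
])
def test_shipping_fee_matches_requirement_r12(total, expected):
    assert shipping_fee(total) == expected
```

Running such a suite under a coverage tool (for example coverage.py, via `coverage run -m pytest`) shows which branches the requirement-derived cases actually exercise.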

Another suite of tests should look for jungle bugs. To name the kind of testing that caters to jungle bugs we could use the term “jungle testing”. For a test designer this is an ambitious task. You will have to look for potentially unexpected conditions or combinations of conditions. Some of the things that you could do include:

  • Focus more on the business than on the requirements, trying to find out which unusual events and circumstances can happen.
  • Talk to people who know the business or system under test well, like end users and business analysts.
  • Work together with other testers and discuss ideas in meetings.
  • Go for “depth” over “breadth”, looking for hidden bugs in specific areas rather than aiming for broad coverage (which is more of an objective for the first category of tests, the one that looks for coding bugs).
  • Ask and discuss “what if” questions, like: what if a user enters letters instead of numbers? (A sketch of such a test follows this list.)
  • Apply risk analysis to determine what should never happen with the system under test, and what conditions could cause such a result.
  • Use “exploratory testing”, a technique that lets testers, in pairs, work with a system interactively (see: What is Exploratory Testing? by James Bach). Since we are dealing with “jungles”, any technique with “exploration” in the name should be a good fit…
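To illustrate the “what if” item above, here is a minimal sketch, reusing the hypothetical parse_quantity parser from the earlier example: each parametrized input is a small jungle scenario, and the expectation is always a controlled rejection.

```python
import pytest

@pytest.mark.parametrize("weird_input", [
    "abc",      # letters instead of numbers
    "12abc",    # digits with trailing letters
    "",         # an empty field
    " 42 ",     # stray whitespace
    "-1",       # negative where only positives make sense
])
def test_unexpected_input_is_rejected_not_crashing(weird_input):
    # parse_quantity is the hypothetical parser sketched earlier.
    with pytest.raises(ValueError):
        parse_quantity(weird_input)
```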

When designing jungle tests, work with the designers to determine which kinds of situations the tests are going to address. This allows the design to be updated to cover those situations, or may even reveal that handling a certain situation has no priority and does not have to be tested. It is also good to have a generic requirement for unexpected situations: how resistant should a system be, and how should it typically respond? I can imagine that for most systems the requirement for an unusual and invalid input would be: (1) don’t crash, and (2) give the user a message, even if it is as simple as “invalid data”.
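Such a generic requirement can itself be expressed as a test. The sketch below assumes a hypothetical top-level wrapper handle_request around the earlier parse_quantity; both the wrapper and its fixed reply are invented for illustration.

```python
def handle_request(raw_input: str) -> str:
    # Deliberately broad guard: the generic requirement is "never crash".
    try:
        value = parse_quantity(raw_input)   # hypothetical parser from above
        return f"accepted: {value}"
    except Exception:
        return "invalid data"               # simple, but a controlled response

def test_generic_robustness_requirement():
    for junk in ("", "???", "9" * 100_000, "letters"):
        reply = handle_request(junk)        # point (1): must not raise
        assert reply == "invalid data"      # point (2): an intelligible message
```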

I feel that treating unexpected-situation testing (“jungle testing”) as a separate category in test development can give additional focus, exceeding the aggressiveness that is usually achieved with regular requirement-based test cases. For each situation, a decision can then be made whether, and to what extent, this is worth the effort.

Hans Buwalda

Hans leads LogiGear’s research and development of test automation solutions, and the delivery of advanced test automation consulting and engineering services. He is a pioneer of the keyword approach for software testing organizations, and he assists clients in strategic implementation of the Action Based Testing™ method throughout their testing organizations.

Hans is also the original architect of LogiGear’s TestArchitect™, the modular keyword-driven toolset for software test design, automation and management. Hans is an internationally recognized expert on test automation, test development and testing technology management. He is coauthor of Integrated Test Design and Automation (Addison Wesley, 2001), and speaks frequently at international testing conferences.

Hans holds a Master of Science in Computer Science from Free University, Amsterdam.

