Jungle Testing

As I write this article I am sitting at a table at StarEast, one of the major testing conferences. As you would expect from a testing conference, much of the talk and discussion is about bugs and how to find them. What I have noticed in some of these discussions, however, is a lack of differentiation between types of bugs, a differentiation that I think is essential for testing success.

To set the stage I would like to distinguish between different categories of bugs. (For a more extensive differentiation of bugs, see the very interesting presentation by Giri Vijayaraghavan and Cem Kaner, “Bug taxonomies: Use them to generate better tests.”) The main categories of bugs I would like to introduce are:

  1. Coding bugs – code that was implemented differently than intended or specified
  2. “Jungle bugs” – unexpected situations, not anticipated in the specifications and therefore not handled well in the code

Coding bugs are commonly the more straightforward ones to find and fix. Good unit testing should be able to catch many of them, and a lot of tests can be designed by following the requirements or specifications that are available.

The unexpected situations are harder. If they were not hard, the situations would not be “unexpected”. Examples in this category are unanticipated user actions (“a user would never do this”), failing environment (an interrupted TCP/IP connection), unexpected data, etc. Some of these can also be malicious, like the common buffer overflow trick, where a hacker deliberately sends an extremely long value that overflows an internal buffer and, by overwriting a return address, redirects a function call to malicious code.
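As an illustration, here is a minimal Python sketch of how innocent-looking code can harbor a jungle bug; the function and the inputs are hypothetical, invented for this example:

    def average_order_size(order_counts):
        """Average items per order. Silently assumes a non-empty list of ints."""
        return sum(order_counts) / len(order_counts)

    print(average_order_size([3, 5, 4]))  # the expected path: prints 4.0

    # Jungle territory: situations nobody wrote into the specification.
    # average_order_size([])          -> ZeroDivisionError ("no orders? never happens")
    # average_order_size(["3", "5"])  -> TypeError (unexpected data from another system)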

A special category of coding bugs is what we could call “indirect bugs”. Indirect bugs occur when one part of a system has a coding bug that leaves a bad value in a table or a variable, and that value then causes a failure or crash in another part of the system. Even though the issue is a coding bug, to the affected part of the system it is an unexpected situation. Whether this is a jungle bug would, in my view, depend on how the unexpected value or situation is handled. If it causes a crash I would consider it a jungle bug: it is better for code never to crash, regardless of what data is accidentally fed into it.
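To illustrate, here is a hypothetical Python sketch of an indirect bug together with a more jungle-resistant rewrite; the record layout and function names are invented for this example:

    # Part A has the coding bug: it leaves None where a number was intended.
    customer_record = {"name": "Acme", "discount_pct": None}

    def invoice_total(record, amount):
        # Part B: to this code the bad value is an unexpected situation.
        # Called with the record above, it crashes with a TypeError.
        return amount * (1 - record["discount_pct"] / 100)

    def invoice_total_safe(record, amount):
        # Jungle-resistant variant: validate first, degrade gracefully.
        discount = record.get("discount_pct")
        if not isinstance(discount, (int, float)):
            discount = 0  # fall back instead of crashing
        return amount * (1 - discount / 100)

    print(invoice_total_safe(customer_record, 200.0))  # 200.0, no crash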

Can thinking about bugs this way help in finding and preventing them? My answer would be “yes”. It gives an extra differentiation in your test design. Not only would the designs for the two categories be different, but the steps you take to find the bugs can also be different.

The first suite of tests would aim for coding bugs. I would like to call them “functional tests”, and they would include most of the unit testing. Such tests can be designed in a straightforward manner, directly related to system requirements and/or functional specifications. For example, one or more test cases are defined for each requirement, and a tester who knows the subject matter under test well should be able to produce most of them without involving others. For this category it can also make sense to measure code coverage, for which good tools are available.
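For instance, a requirement-driven test could look like the following Python sketch, written for pytest; REQ-017 and the shipping_cost function are hypothetical, made up for illustration:

    def shipping_cost(order_total):
        """REQ-017 (hypothetical): orders of $100 or more ship free; otherwise $5."""
        return 0.0 if order_total >= 100 else 5.0

    # One straightforward test case per side of the requirement's boundary:
    def test_req_017_free_shipping_at_threshold():
        assert shipping_cost(100.00) == 0.0

    def test_req_017_paid_shipping_below_threshold():
        assert shipping_cost(99.99) == 5.0

Such tests trace directly back to the specification, and a coverage tool like coverage.py can report which parts of the code they exercise, for example by running coverage run -m pytest followed by coverage report.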

Another suite of tests should look for jungle bugs. To name the kind of testing that caters to jungle bugs we could use the term “jungle testing”. For a test designer this is an ambitious task. You will have to look for potentially unexpected conditions or combinations of conditions. Some of the things that you could do include:

  • Focus more on the business than on the requirements, trying to find out which unusual events and circumstances can happen.
  • Talk to people who know the business or system under test well, like end users and business analysts.
  • Work together with other testers and discuss ideas in meetings.
  • Go for “depth” over “breadth”, looking for hidden bugs in specific areas rather than aiming for broad coverage (broad coverage is more of an objective for the first category of tests, which looks for coding bugs).
  • Ask and discuss “what if” questions, like what if a user enters letters instead of numbers (a sketch of such a test follows this list).
  • Apply risk analysis to determine what should never happen with the system under test, and what conditions could cause such a result.
  • Use “exploratory testing”, a technique that lets testers, in pairs, work with a system interactively (see: What is Exploratory Testing? by James Bach). Since we are dealing with “jungles”, any technique with “exploration” in the name should be a good fit…
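
To make the “what if” item concrete, here is a small, hypothetical jungle test in Python with pytest; the parse_quantity function and the chosen inputs are illustrative assumptions, not taken from a real system:

    import pytest

    def parse_quantity(raw):
        # Accept only a positive integer quantity; reject everything else.
        try:
            value = int(raw)
        except (TypeError, ValueError):
            raise ValueError("invalid data")
        if value <= 0:
            raise ValueError("invalid data")
        return value

    @pytest.mark.parametrize("raw", ["abc", "", "  ", None, "-3", "2.5"])
    def test_what_if_unexpected_quantity(raw):
        # These inputs may be rejected, but they must never crash uncontrolled.
        with pytest.raises(ValueError):
            parse_quantity(raw)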

When designing jungle tests, work with the designers to determine the kinds of situations the tests are going to address. This allows the design to be updated to cover those situations, or may even reveal that handling a certain situation has no priority and does not have to be tested. It is also good to have a generic requirement for unexpected situations: how resistant should the system be, and how should it typically respond? I can imagine that for most systems the requirement for an unusual and invalid input would be: (1) don’t crash, and (2) give the user a message, which can be as simple as “invalid data”.
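As a minimal sketch of such a generic requirement in code (the handler shape below is an assumption, not a prescribed design):

    def handle_request(raw_input, process):
        # Generic guard: (1) never crash, (2) answer with a simple message.
        try:
            return process(raw_input)
        except ValueError:
            return "invalid data"  # deliberate rejection of unusual input
        except Exception:
            # Safety net for truly unexpected situations: log here, don't crash.
            return "invalid data"

    print(handle_request("abc", int))  # prints "invalid data" instead of crashing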

I feel that treating unexpected-situation testing (“jungle testing”) as a separate category in test development can give additional focus, exceeding the aggressiveness that is usually achieved with regular requirement-based test cases. Per situation, a decision can be made whether, and to what extent, this is worth the effort.

Hans Buwalda

Hans leads LogiGear’s research and development of test automation solutions, and the delivery of advanced test automation consulting and engineering services. He is a pioneer of the keyword approach for software testing organizations, and he assists clients in strategic implementation of the Action Based Testing™ method throughout their testing organizations.

Hans is also the original architect of LogiGear’s TestArchitect™, the modular keyword-driven toolset for software test design, automation and management. Hans is an internationally recognized expert on test automation, test development and testing technology management. He is coauthor of Integrated Test Design and Automation (Addison-Wesley, 2001), and speaks frequently at international testing conferences.

Hans holds a Master of Science in Computer Science from Free University, Amsterdam.
