Jungle Testing

As I write this article I am sitting at a table at StarEast, one of the major testing conferences. As you would expect from a testing conference, a lot of the talk and discussion is about bugs and how to find them. What I have noticed in some of these discussions, however, is a lack of differentiation between types of bugs, a distinction that I think is essential for testing success.

To set the stage I would like to distinguish between different categories of bugs (for a more extensive differentiation of bugs, see the very interesting presentation by Giri Vijayaraghavan and Cem Kaner, “Bug taxonomies: Use them to generate better tests”). The main categories of bugs I would like to introduce are:

  1. Coding bugs – things that were implemented differently than intended or specified
  2. “Jungle bugs” – unexpected situations that were not anticipated in the specifications and are therefore not handled well in the code

Coding bugs are commonly the more straightforward ones to find and fix. Good unit testing should be able to catch many of them, and many tests can be designed by following the requirements or specifications that are available.
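
To make this concrete, here is a minimal sketch of such a requirement-based unit test, in Python. The discount function, its rule, and all the names are invented for this illustration; the point is how directly a stated requirement translates into test cases:

```python
import pytest

# Hypothetical function under test. Assumed requirement for this sketch:
# orders of $100 or more get a 10% discount.
def discounted_total(amount: float) -> float:
    if amount >= 100:
        return amount * 0.9
    return amount

# Straightforward, requirement-based unit tests (pytest style):
# one or more test cases per requirement, including the boundary value.
def test_discount_applied_at_threshold():
    assert discounted_total(100.0) == pytest.approx(90.0)

def test_no_discount_below_threshold():
    assert discounted_total(99.99) == pytest.approx(99.99)
```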

The unexpected situations are harder. If they were not hard, the situations would not be “unexpected”. Examples in this category are unanticipated user actions (“a user would never do this”), a failing environment (an interrupted TCP/IP connection), unexpected data, etc. Some of these can also be malicious, like the common buffer overflow trick, where a hacker deliberately sends an extremely long value that overflows an internal buffer and, by overwriting a return address, redirects a function call to malicious code.

A special category of coding bugs is what we could call “indirect bugs”. Indirect bugs arise when one part of a system has a coding bug that leaves a bad value in a table or a variable, which then causes a failure or crash in another part of the system. Even though the issue is a coding bug, to the affected part of the system it is an unexpected situation. Whether this is a jungle bug would, in my view, depend on how the unexpected value or situation is handled. If it causes a crash I would consider it a jungle bug: it is better for code never to crash, regardless of what data is accidentally fed into it.
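
A small sketch of how an indirect bug plays out (hypothetical Python code, with all names invented for the example): component A’s coding bug leaves a bad value behind, and component B either turns it into a crash, making it a jungle bug, or handles it defensively:

```python
import logging

# Component A's coding bug: it should have stored an integer quantity,
# but leaves None behind in the shared table.
inventory = {"widget": None}

def total_value_naive(price: float) -> float:
    # Crashes with a TypeError on the bad value: component A's coding
    # bug surfaces as a jungle bug here in component B.
    return inventory["widget"] * price

def total_value_defensive(price: float) -> float:
    quantity = inventory["widget"]
    if not isinstance(quantity, int):
        # Report the unexpected data and degrade gracefully instead of crashing.
        logging.warning("unexpected inventory value %r for 'widget'", quantity)
        return 0.0
    return quantity * price
```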

Can thinking about bugs this way help in finding and preventing them? My answer would be “yes”. It gives an extra differentiation in your test design. Not only would the designs for the two categories be different, but the steps you take to find the bugs can also be different.

The first suite of tests would aim for coding bugs. I would like to call them “functional tests”, which would include most of the unit testing. Such tests would be designed in a straightforward manner, directly related to system requirements and/or functional specifications. For example, one or more test cases are defined for each requirement, and a tester who knows the subject matter under test well should be able to produce most of them without involvement of others. For this category it can also make sense to measure code coverage, for which good tools are available.

Another suite of tests should look for jungle bugs. To name the kind of testing that caters to jungle bugs we could use the term “jungle testing”. For a test designer this is an ambitious task. You will have to look for potentially unexpected conditions or combinations of conditions. Some of the things that you could do include:

  • Focus more on the business than on the requirements, trying to find out which unusual events and circumstances can happen.
  • Talk to people who know the business or system under test well, like end users and business analysts.
  • Work together with other testers and discuss ideas in meetings.
  • Go for “depth” over “breadth”, looking for hidden bugs in specific areas rather than broad coverage (broad coverage is more of an objective for the first category of tests, which looks for coding bugs).
  • Ask and discuss “what if” questions, like what happens if a user enters letters instead of numbers (see the sketch after this list).
  • Apply risk analysis to determine what should never happen with the system under test, and what conditions could cause such a result.
  • Use “exploratory testing”, a technique that lets testers, in pairs, work with a system interactively (see: What is Exploratory Testing? by James Bach). Since we are dealing with “jungles”, any technique with “exploration” in the name should be a good fit…
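
As promised above, here is a sketch of what such “what if” tests can look like in practice. The Python code is hypothetical: parse_quantity and its contract (accept a short string of digits, reject everything else with a ValueError) are invented for the illustration:

```python
import pytest

# Hypothetical function under test. Assumed contract for this sketch:
# a valid quantity is a short string of digits; everything else must be
# rejected with a ValueError rather than an uncontrolled crash.
def parse_quantity(raw: str) -> int:
    if not raw.isdigit() or len(raw) > 6:
        raise ValueError("invalid data")
    return int(raw)

# Jungle-style "what if" inputs: letters instead of numbers, empty input,
# mixed input, negative numbers, stray whitespace, and a deliberately
# oversized value (the buffer overflow trick in miniature).
@pytest.mark.parametrize("raw", ["abc", "", "12a", "-5", " 7 ", "9" * 10000])
def test_unexpected_input_is_rejected_cleanly(raw):
    with pytest.raises(ValueError):
        parse_quantity(raw)
```

Each parametrized case asks one “what if” question; when the real system answers with a crash rather than a controlled error, a jungle bug has been found.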

When designing jungle tests, work with the designers to determine what kinds of situations the tests are going to address. This allows the design to be updated to cover those situations, or may even reveal that handling a certain situation has no priority and does not have to be tested. It is also good to have a generic requirement for unexpected situations: how resistant should the system be, and how should it typically respond? I can imagine that for most systems the requirement for unusual and invalid input would be: (1) don’t crash, (2) give the user a message, which can be as simple as “invalid data”.
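
Such a generic requirement can itself be expressed as a test. A minimal sketch, again in hypothetical Python: the handler and its message are invented here, assuming the “don’t crash, say ‘invalid data’” requirement above:

```python
def handle_quantity_input(raw: str) -> str:
    # UI-level handler embodying the generic requirement:
    # (1) never crash, (2) answer invalid input with a simple message.
    try:
        if not raw.isdigit() or len(raw) > 6:
            raise ValueError
        return f"quantity set to {int(raw)}"
    except Exception:
        return "invalid data"

def test_generic_robustness_requirement():
    # Whatever invalid input arrives, the response is a message, never a crash.
    for raw in ["abc", "", "☃", "9" * 10000]:
        assert handle_quantity_input(raw) == "invalid data"
```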

I feel that treating unexpected-situation testing (“jungle testing”) as a separate category in test development can give additional focus, exceeding the aggressiveness that is usually achieved with regular requirement-based test cases. Per situation, a decision can be made whether, and to what extent, this is worth the effort.

Hans Buwalda

Hans leads LogiGear’s research and development of test automation solutions, and the delivery of advanced test automation consulting and engineering services. He is a pioneer of the keyword approach for software testing organizations, and he assists clients in strategic implementation of the Action Based Testing™ method throughout their testing organizations.

Hans is also the original architect of LogiGear’s TestArchitect™, the modular keyword-driven toolset for software test design, automation and management. Hans is an internationally recognized expert on test automation, test development and testing technology management. He is coauthor of Integrated Test Design and Automation (Addison Wesley, 2001), and speaks frequently at international testing conferences.

Hans holds a Master of Science in Computer Science from Free University, Amsterdam.
