Black Box Software Testing: Test Design Course

A few months ago, Dr. Rebecca Fiedler and I published BBST—Test Design. This third course completes the Black Box Software Testing (BBST) set. The other two courses are BBST Foundations and BBST Bug Advocacy. This article offers some information about the series, its design (and the underlying instructional theory), and why you might be interested in it.

Cem Kaner

All of the existing courseware is available to you for free.

You can download the courseware directly from my lab’s website, www.testingeducation.org/BBST or from our corporate website, www.bbst.info.

If you are reading this article a few years from now and if bbst.info and www.testingeducation.org are outdated, check the National Science Digital Library, www.nsdl.org. NSDL is an electronic library of instructional materials funded by the National Science Foundation of the United States. BBST has been accepted as a collection in NSDL.

The courseware is free, but not cheap. The underlying research and development cost more than a million dollars over a period of 11 years. But the public has already paid for these materials, so there is no ethical basis for making you pay for them again.

The way this came about is that I returned to school (Florida Tech) in 2000 to improve my teaching. As a financially successful testing trainer in the 1990s, I was not satisfied with the results of my courses or the courses of many colleagues whom I knew and respected. In our short courses, we overwhelmed students with information, most of which they forgot. Our three days with them didn’t give them enough time to practice and develop skills; to try techniques on hard problems and get feedback; or to apply the ideas to their own work and come back with critical questions when their first try at application didn’t work. I wanted to develop courses that could make a more significant difference to the technical knowledge and cognitive skills of my students. Soon after I came to Florida Tech, the National Science Foundation agreed to support my research on testing and on online learning. In return, I agreed that courseware that I created with NSF support would be available to the public for free.

Our courseware includes:

  • Videotaped lectures and their accompanying slides (currently 1065 slides).
  • Readings: These are articles and book chapters (some specially updated for this course) authored by leading people in the field.
  • Assignments and study guide questions to help you work through the material with your friends. We’ve included suggestions for using and grading the assignments and questions in our Instructor’s Manual.

The best way to work through these materials is in a group. The members of the group work through the lectures together, do assignments and take tests together, apply the ideas to their day-to-day work and discuss the results. Some examples of how our courseware is being used are:

  • The Association for Software Testing (AST) offers the three courses to its members at a low price. AST can afford such a low price because its instructors teach the course for free. (Not surprisingly, AST can only offer these courses a few times per year.)
  • Some companies train one of their staff to be a lead trainer for BBST at their company. That person then coordinates classes at the company.
  • Several universities use our materials in courses that they offer.
  • Some people choose to form their own study group and work through the materials together.
  • Some professional trainers offer this course to the public or to companies. These people charge normal commercial rates for the course.

Dr. Fiedler and I offer the course to companies who contract with us to train their staff – we often customize the course a little for the client company. (We charge normal commercial rates for this service.) Dr. Fiedler, Doug Hoffman and I are just finishing the Black Box Software Testing Instructor’s Manual. We’ve been using drafts of this (now 400-page) book for three years to help people improve their online teaching skills and learn how to teach the BBST courses. We’ve added a lot of detail to this final draft. We hope that experienced trainers will be able to learn how to teach BBST from the book, even if they cannot come to one of our instructor training courses.

The book is now in final production. We expect printed copies at Amazon.com by June (maybe sooner). Electronic copies of the book will also be available at testingeducation.org, for free.

Dr. Fiedler and I are also writing a Students’ Workbook for the BBST Courses. Our primary goal is to support people who are studying the material on their own. We also want to support instructors (and students) who find it easier to work with a course like this if it has a textbook. The Students’ Workbook adds notes on every slide, many more references, review questions, and exam questions, and suggests several more learning activities. The Students’ Workbook won’t be free – the research support funds that we relied on to help us develop the courses and Instructor’s Manual just don’t stretch far enough to subsidize the Students’ Workbook as well. So this will be published commercially, at a price we expect to be reasonable for a technical book. We anticipate (but cannot guarantee) that the workbook will be available by September 2012.

Principles Underlying the Courses

BBST starts from the premise that software testing is a cognitively complex activity. We think people are pretty smart. That includes junior testers. We don’t think that the difference between junior testers and seniors is that juniors should do routine tasks and seniors use their brains. Probably every tester has some routine, boring work to do. But we think that every tester, no matter how junior, should also exercise skill and judgment, critical thinking and creativity. More experienced testers can handle more complex tasks with less supervision, but in our opinion, even the most junior testers should be trained and managed to apply their own judgment and skill to their work.

We see testing as an empirical, technical investigation conducted to provide stakeholders with information about the quality of the product or service under test.

This definition of testing carries some important implications for our practice as testers and as educators:

  • Different stakeholders need (or want) different information. One person might want to estimate the cost of supporting the software and what needs to change to reduce support costs. Another person might want to compare the product’s quality to that of competing products. A third might want to compare the product to a specification in a contract. And a fourth might want to understand whether an accident that killed someone was caused by defects in this software. These are all reasonable things to want to know. An important part of our work, as testing-service providers, is to find out what kinds of information our clients want and to communicate that information clearly to them.
  • Testing is an empirical activity – we gain our knowledge by running experiments (we call them tests). Humans have been thinking about how to systematically gain empirical knowledge for about 2000 years. We have a lot to gain from a study of the history and methods of science.
  • Different testing techniques are better suited for exposing different types of information. For example, some techniques are very powerful for hunting bugs, but other techniques are more effective for highlighting issues that will cause customer confusion, dissatisfaction and calls for support. To do the job well, a tester must know many techniques and must understand these techniques’ strengths and blind spots. In being so prepared, she is then in a position to pick the right set of techniques for collecting the types of information needed for whatever project she is working on.

Many courses focus on definitions. Definitions are easy to teach. Definitions are easy to test – especially on multiple-choice exams that can be graded by computers. But in our view, having knowledge of definitions is not very important. Knowing the definition of a technique does not tell you when to use it, or how to use it, and even if you can memorize a definition that includes a description of how to use it, you won’t be able to use it well without successful practice.

We present definitions because we must, but our objective is the development of judgment and skill.

We use multiple-choice tests as aids for review. Our multiple-choice tests are open book. Our questions are more difficult than most because we use them to teach and to stimulate discussion. When we teach the course to university students and to practitioners, we never use the multiple-choice test results to decide whether a student passed or failed the course.

We will publish a collection of multiple-choice review questions in the Students’ Workbook. If you study the course material on your own or with a group of friends, don’t worry if you get questions wrong. Use the feedback as a guide to learning more.

We use essay exams and assignments as our most important teaching tools. There is now plenty of research establishing that significant learning takes place when students write exams. We provide study guides with large collections of essay questions and design our exams to encourage students to create practice exams in preparation for our finals. With the assignments, we apply the course ideas to real products. For example, in the Bug Advocacy course, students join the test team of a significant open source project (typically Open Office or Firefox) and help the team replicate and clarify its bug reports. In the Test Design course, students assess specifications and documentation from significant projects (such as Google Docs) and apply risk analysis and boundary analysis to variables from a well-known product (such as Open Office). We design the assignments so that any company teaching the course to its own staff can easily modify them for use on its own products.

The Courses

The BBST series includes three courses:

BBST Foundations presents the basic concepts of software testing and helps students develop online learning skills:

  • We introduce the basic vocabulary of the field. The most important part of this is a review of (or for some students, an introduction to) basic facts of data storage and manipulation and of the flow of control in computer programs.
  • We also present definitions of testing concepts because we need to establish a common vocabulary for the course. But along with these, we present our viewpoint that there are many definitions for most testing concepts and that the differences often reflect genuine disagreements about the nature of competent testing. We have a lot to learn, in our field, before those disagreements will be resolved. We teach testers that the best way to communicate with other people is to figure out what those people mean when they say something – ask them questions! – instead of assuming that they use the words the same way an instructor did in a testing course.
  • We also introduce the key challenges of software testing:
    • The best strategy for testing a project (including the choice of techniques) varies. It depends on the informational needs of the project’s stakeholders.
    • An oracle is something that helps you decide whether a program passed or failed a test. For example, the test’s “expected results” are an oracle. However, all oracles are incomplete and they can be misleading. The pass/no-pass decision requires judgment, not just specifications and procedures.
    • There are hundreds of ways to measure coverage, each emphasizing different risks. We teach enough about programming for students to understand the simple code-structure coverage measures (such as branch coverage and multi-condition coverage) but we also explain why this is a narrow sample of the testing that can expose significant problems. Complete structural coverage is not complete coverage.
    • Complete testing is impossible. On the way to teaching this, we review some discrete mathematics to help students understand how to estimate how many tests it would take to fully test some simple parts of programs. (A small worked example follows this list.)
    • Finally, we introduce students to software metrics, the theory of measurement, and the risks of measurements that are not skillfully designed. Most metrics of software quality and of the effectiveness or productivity of development and testing staff are human performance measures. We explain measurement dysfunction (the very serious risk of making things worse by relying on bad metrics). This is just an introduction. I teach a full-semester course on software metrics (about the same amount of content as Foundations + Bug Advocacy combined) and hope to bring that online in the next few years.
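
To make the impossibility of complete testing concrete, here is a back-of-the-envelope sketch in Python. It is only an illustration of the counting argument; the add function and the million-tests-per-second rate are invented assumptions, not examples taken from the course materials.

def add(a, b):
    # A hypothetical one-line function that accepts two 32-bit signed integers.
    return a + b

# Each 32-bit argument can take 2**32 distinct values, so exhaustively testing
# every input pair would require 2**32 * 2**32 = 2**64 test cases.
total_inputs = 2 ** 64                     # 18,446,744,073,709,551,616 input pairs
tests_per_second = 1_000_000               # invented assumption: a very fast automated harness
seconds_per_year = 60 * 60 * 24 * 365
years_needed = total_inputs / tests_per_second / seconds_per_year

print(f"{total_inputs:,} input pairs")
print(f"about {years_needed:,.0f} years to run them all")   # roughly 585,000 years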

BBST Bug Advocacy presents bug reporting as a communications challenge. Bug reports are not just neutral technical reports. They are persuasive documents. The key goal of the bug report author is to provide high-quality information, well written, to help stakeholders make wise decisions about which bugs to fix.

  • We define key concepts (such as software error, quality, and the bug processing workflow). Of course, we present the diversity of views. There are many different definitions of quality – not just different words, but different ideas about what “quality” means and what counts as a departure from high quality.
  • We consider the scope of bug reporting: how should testers decide what findings they should report as bugs and what information to include in the reports? (A sketch of a typical report’s contents follows this list.)
  • We present bug reporting as persuasive writing. The tester makes the case that someone should take this bug seriously – and hopefully fix it. Students learn how to write more persuasively. Within this topic, we apply psychological research on decision-making and decision-affecting biases to consideration of how project managers and executives make decisions about bug reports.
  • And of course, we provide a lot of tips for troubleshooting bugs and for making them more reproducible.
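
To show the kind of information a well-formed report carries, here is a minimal sketch in Python. The field names follow common bug-tracking conventions and every value is invented, so treat it as an illustration rather than the BBST reporting template.

# Conventional fields of a bug report; the scenario and values are hypothetical.
bug_report = {
    "summary": "Pasting a 70,000-character string into the search box freezes the app",
    "steps_to_reproduce": [
        "1. Open the application and click the search box.",
        "2. Paste a 70,000-character string from the clipboard.",
        "3. Press Enter.",
    ],
    "expected_result": "The search runs, or the input is rejected with a clear message.",
    "actual_result": "The application stops responding and must be force-closed.",
    "environment": "Build 2.3.1, Windows 7 SP1, 4 GB RAM",
    "impact": "Any user can trigger this with an ordinary copy-and-paste.",
}

# A report is only persuasive if the reader can reproduce and evaluate the problem,
# so check that the essential fields are filled in before filing.
required = ("summary", "steps_to_reproduce", "expected_result", "actual_result")
missing = [field for field in required if not bug_report.get(field)]
print("ready to file" if not missing else f"missing fields: {missing}")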

BBST Test Design surveys over 100 test techniques. You can’t select the right techniques for your context if you’re not familiar with a wide array of techniques.

  • We take a detailed look at function testing, testing tours, risk-based testing, specification-based testing, scenario testing, domain testing, and some types of combination testing. We consider their strengths and weaknesses, what they are useful for, what they are likely to miss, and how much work it takes to use them well. (A small domain-testing sketch follows this list.)
  • We suggest that a testing strategy combines a set of testing techniques that, together, will be effective at providing the stakeholders with the types of information they want from testing.
  • We present ways to compare the strengths of different techniques, to help students combine techniques that have complementary strengths.
  • And in the process, we present some tools for concept mapping, active reading, specification analysis, and combinatorial analysis.
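
As a small taste of one of these techniques, here is a sketch of domain (boundary) analysis in Python. The accepts_age function and its 0-to-120 limits are invented for illustration and are not drawn from the course exercises.

def accepts_age(age, low=0, high=120):
    # Hypothetical validation rule: the field accepts integers from low through high.
    return low <= age <= high

# Domain testing partitions the input into valid and invalid classes, then samples
# at the boundaries of each class, where off-by-one mistakes tend to cluster.
boundary_cases = {
    -1: False,    # just below the valid range (invalid class)
    0: True,      # lower boundary of the valid class
    120: True,    # upper boundary of the valid class
    121: False,   # just above the valid range (invalid class)
}

for value, expected in boundary_cases.items():
    assert accepts_age(value) == expected, f"boundary failure at {value}"
print("all boundary cases behaved as expected")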

Summing Up

Good training should help you and your staff become better testers.

Good exams ask the kinds of questions that job interviewers actually care about. People don’t really care whether a tester knows the words of a definition of a technique (or, at least, they shouldn’t care!). They care whether the tester can actually use that technique to find bugs or to find other useful information.

Good assignments provide authentic tasks – tasks that students recognize as real-world, and that they realize will help them prepare to do their own real work. Good assignments help students develop skills. (You get better at skilled work with practice and feedback.)

The BBST series is not a set of miracle courses. These courses do require a lot of work. But many of our students have found them invaluable. You can read the testimonials at http://bbst.info/?page_id=35, or just do a Google search on < software testing course “bbst” > and you’ll find plenty of reviews online.

If you do use our courseware, we’d love to hear back from you on how you are using it and how it is working for you. This feedback (including negative feedback) will help us improve our course designs for the future. It is also useful for the organizations that support our research.

About the Author

Cem Kaner is a Professor of Software Engineering at the Florida Institute of Technology. He combines an academic background (doctorates in psychology and in law) with extensive software development experience (programmer, tester, manager, director, etc.) in Silicon Valley. Kaner is co-author of Testing Computer Software with Hung Quoc Nguyen and Jack Falk. He also co-authored Bad Software and Lessons Learned in Software Testing.

BBST is a registered trademark of Kaner, Fiedler & Associates, LLC

We acknowledge the support of NSF research grants EIA-0113539 ITR/SY+PE: “Improving the Education of Software Testers” and CCLI-0717613 “Adaptation & Implementation of an Activity-Based Online or Hybrid Course in Software Testing.” We also appreciate the generous support of Texas Instruments, IBM/Rational and the Florida Institute of Technology and generous contributions of course content from Hung Quoc Nguyen, Douglas L. Hoffman, James Bach, and Dr. Rebecca Fiedler, Co-Principal Investigator of the National Science Foundation grants. Dr. Fiedler holds a master’s in business administration and a doctorate in education. Along with leading the instructional design of BBST, she has designed online courses for academic students and government agencies. Any opinions, findings and conclusions or recommendations expressed in this article or in the courseware are those of the authors and do not necessarily reflect the views of the National Science Foundation or our other donors.

J.J.G. Van Merrienboer, Training Complex Cognitive Skills: A Four-Component Instructional Design Model for Technical Training (Englewood Cliffs, NJ: Educational Technology Publications, 1997).

Doug Rohrer and Harold Pashler, “Recent Research on Human Learning Challenges Conventional Instructional Strategies,” Educational Researcher 39, no. 5 (2010): 406-412, http://edr.sagepub.com/content/39/5/406.abstract; Eric Jaffe, “Will that be on the test?,” Observer of the Association for Psychological Science 21, no. 10 (2008): 18-21, http://www.psychologicalscience.org/observer/getArticle.cfm?id=2425.

