Testing Smoke Detectors

People rely on software more every year, so testing it is critical. But one thing that often gets overlooked, and that should be tested just as regularly, is the smoke detector.

As the relatively young field of software quality engineering matures, with all its emerging trends and terminology, software engineers often overlook that the software they test has close parallels to something they should test regularly at home: their smoke detectors.

A silent smoke detector gives occupants peace of mind; no news is good news. But smoke detectors need to be tested periodically to ensure they are still alive and still capable of saving lives. Since people rely more and more on software every year, testing it is just as critical: software bugs can have consequences ranging from a wrong typeface to a catastrophic loss of life.

A smoke detector has essentially three components: a power supply, a smoke sensor, and an alarm unit. Each component is tested in a different manner, both individually and in combination. Similarly, modern software is divided into individual modules that are written by different developers, are constantly changed and replaced, and may not be compatible with one another.

The power supply, or battery, has a built-in unit test: the LED that indicates the battery has adequate voltage. The user’s role is to habitually verify, visually, that the LED is on. Modern smoke detectors can also “throw exceptions,” emitting a chirping noise or a recorded voice message when the battery is weak. Still, the LED and the low-power warnings test only the power supply.
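
Expressed as code, the LED check is simply a unit test on one component in isolation. Here is a minimal sketch in Python, assuming a hypothetical Battery class with an illustrative voltage threshold; none of the names come from a real library.

```python
import unittest

class Battery:
    """Hypothetical power supply model: reports its current voltage."""
    def __init__(self, voltage):
        self.voltage = voltage

    def has_adequate_voltage(self, threshold=8.0):
        # The LED on a real detector plays the same role as this check.
        return self.voltage >= threshold

class TestBatteryUnit(unittest.TestCase):
    def test_fresh_battery_passes(self):
        self.assertTrue(Battery(voltage=9.0).has_adequate_voltage())

    def test_weak_battery_fails(self):
        self.assertFalse(Battery(voltage=6.5).has_adequate_voltage())

if __name__ == "__main__":
    unittest.main()
```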

The alarm unit is the main component exercised by the user’s manual test. An alarm unit emits an audible alarm when it receives an input current.

This is a fairly standard test case. The input conditions are that the smoke detector is installed, ready, and equipped with a battery. The input “data” is manual pressure on the test button. The expected output “data” is an audible alarm. The expected output conditions are that the device can be silenced manually and reset, steps also known as teardown tasks.
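
That four-part shape (input conditions, input data, expected output, teardown) maps directly onto the setup/test/teardown structure of a unit test framework. The sketch below uses Python’s unittest and a hypothetical SmokeDetector model; the class and method names are assumptions for illustration only.

```python
import unittest

class SmokeDetector:
    """Hypothetical model of an installed detector with a test button."""
    def __init__(self):
        self.alarm_sounding = False

    def press_test_button(self):
        # Feeds current straight to the alarm unit.
        self.alarm_sounding = True

    def silence(self):
        self.alarm_sounding = False

class TestButtonCase(unittest.TestCase):
    def setUp(self):
        # Input conditions: detector installed and powered.
        self.detector = SmokeDetector()

    def test_button_sounds_alarm(self):
        # Input "data": manual pressure on the test button.
        self.detector.press_test_button()
        # Expected output "data": an audible alarm.
        self.assertTrue(self.detector.alarm_sounding)

    def tearDown(self):
        # Teardown: silence and reset the device.
        self.detector.silence()

if __name__ == "__main__":
    unittest.main()
```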

The alarm unit is a black box under test. We are not concerned with how the alarm turns current into sound, just that it sounds when triggered.

The alarm unit has presumably been unit tested at the factory. When we do a manual test of the smoke detector, we are doing an integration test of the partial system, verifying that two previously tested components, the battery and alarm unit, will function together. The alarm sounding after the test button is depressed verifies that the power supply, the alarm unit, and the connecting wires (the interface between the two units) all function properly.
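
In code, the same idea is an integration test: two components that have already passed their own unit tests are wired together, and the test only checks the interface between them. A minimal sketch, with hypothetical Battery and AlarmUnit classes standing in for the real hardware:

```python
import unittest

class Battery:
    """Already unit tested at the 'factory'."""
    def __init__(self, voltage=9.0):
        self.voltage = voltage

    def supply_current(self):
        return self.voltage > 0

class AlarmUnit:
    """Also already unit tested; sounds when it receives current."""
    def sound(self, current_present):
        return "BEEP" if current_present else ""

class TestBatteryAlarmIntegration(unittest.TestCase):
    def test_powered_alarm_sounds(self):
        # We only verify that the two components work together
        # across the "wire" connecting them.
        battery, alarm = Battery(), AlarmUnit()
        self.assertEqual(alarm.sound(battery.supply_current()), "BEEP")

if __name__ == "__main__":
    unittest.main()
```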

The test button is not an integral part of the system under test. It is a test harness that aids in testing. It contributes nothing to the intended purpose of alerting inhabitants of a fire. A smoke detector would function the same without a test button or an LED; we just could not test it.

The aforementioned manual black box integration test still misses one key system component: the smoke sensor. When the test button is pressed, it feeds current directly to the alarm unit, bypassing the smoke sensor. Hearing the alarm after pressing the button does not prove that the smoke detector will react to actual smoke. A test harness feeds artificial input data to a component under test, rather than output data from the upstream component, in order to observe the output. The component is undergoing bottom-up testing.
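
A software equivalent of the test button is a driver in bottom-up testing: a small piece of harness code that feeds artificial input to the component under test instead of the real output from its upstream neighbor. A sketch, again using hypothetical names:

```python
class AlarmUnit:
    """Component under test; normally driven by the smoke sensor."""
    def sound(self, current_present):
        return "BEEP" if current_present else ""

def test_button_driver(alarm):
    # Harness code: bypasses the smoke sensor entirely and feeds
    # an artificial input current straight to the alarm unit.
    return alarm.sound(current_present=True)

if __name__ == "__main__":
    # Hearing "BEEP" proves the alarm unit and the wiring work,
    # but says nothing about the smoke sensor.
    assert test_button_driver(AlarmUnit()) == "BEEP"
    print("alarm unit responds to artificial input")
```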

The smoke sensor is essentially a glorified switch that allows current to pass through when near smoke and blocks the flow otherwise. We trust the manufacturer to test the smoke sensor for a lifetime of service. A research lab presumably has some sort of “smoke room” that simulates the structure and air flow of the rooms where end users will place their smoke detectors. Researchers can place multiple smoke sensors around the room and remotely introduce smoke of different types and concentration levels.

It is not necessary here to know how a smoke sensor actually senses smoke; the test is to verify that a smoke sensor will emit an output current when surrounded by smoke. Also, instead of alarm units, the smoke sensors under test are connected to stubs or recorders, and undergo top-down testing. Using stubs has many advantages over using real output components.

With stubs, many different smoke sensors can be under test at once. Each stub records if and when the sensor under test emits an output current, and can populate a database directly for analysis. Also, a human does not have to enter the smoke-filled room while the test is underway. Furthermore, the same smoke sensors may be connected to different output devices: alarm units, voice speakers, fire sprinklers, or a direct fire-department connection. A stub can substitute for any kind of output device. Similarly, a software module under test may be designed to call other modules that are not under test or are yet to be written; these called modules are replaced with stubs, which may be merely a single line of code that prints “module xyz is called and run here”.
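
In software terms, the lab’s recorders are stubs: throwaway stand-ins for downstream components that simply note that they were called, and with what. Below is a minimal sketch of a stub replacing the alarm unit while a hypothetical SmokeSensor is under test; the threshold and names are purely illustrative.

```python
class SmokeSensor:
    """Component under test: emits current when it detects smoke."""
    def __init__(self, output_device):
        self.output_device = output_device

    def sample(self, smoke_level):
        if smoke_level > 0.1:          # illustrative trigger threshold
            self.output_device.receive_current()

class OutputStub:
    """Stands in for the real alarm, sprinkler, or fire-department link."""
    def __init__(self):
        self.triggered = 0

    def receive_current(self):
        # A bare-bones stub might just print "module xyz is called and run
        # here"; this one also counts calls so results can be analyzed later.
        self.triggered += 1

if __name__ == "__main__":
    stub = OutputStub()
    sensor = SmokeSensor(output_device=stub)
    sensor.sample(smoke_level=0.5)   # simulated smoke
    sensor.sample(smoke_level=0.0)   # clean air
    assert stub.triggered == 1
    print("sensor triggered its output exactly once")
```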

Of course, this testing with smoke is not to be confused with a smoke test, in which a new component is connected and launched, just to assert it will power on without “making smoke”, to determine if further testing can or should start.

A system test will verify that a complete smoke detector emits an alarm of a certain decibel level, when surrounded by smoke containing a certain carbon monoxide concentration level.
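
Expressed as code, a system test exercises the whole chain, with power supply, sensor, and alarm assembled, against a measurable requirement. A sketch with hypothetical names and illustrative numbers (they are not real regulatory thresholds):

```python
import unittest

class AssembledDetector:
    """Hypothetical fully assembled detector: battery, sensor, and alarm."""
    def alarm_decibels(self, co_ppm):
        # Illustrative behavior: alarm at 85 dB once CO reaches 70 ppm.
        return 85.0 if co_ppm >= 70 else 0.0

class TestSystem(unittest.TestCase):
    def test_alarm_loud_enough_at_threshold_concentration(self):
        detector = AssembledDetector()
        # Requirement (illustrative): at 70 ppm CO the alarm
        # must reach at least 85 dB.
        self.assertGreaterEqual(detector.alarm_decibels(co_ppm=70), 85.0)

if __name__ == "__main__":
    unittest.main()
```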

An acceptance test, following a system test, validates that the smoke detector is suitable to protect a particular household from fire: that the alarm can wake the residents through closed doors and, ideally, will not report false positives when triggered by smoke from cooking, ashtrays, incense, and so on, given the layout of the house and the residents’ lifestyle.

It is cumbersome and dangerous to test a smoke detector with real smoke, so few homeowners do this regularly. They do periodic integration tests and rely on the factory’s unit tests. Besides, many home smoke detectors get unintended system tests when smoke from regular kitchen cooking triggers the alarm.

Fred Murphy
Fred Murphy grew up in Menlo Park, where he started programming on a TRS-80. He received his degree in Computer Science from Loyola University in Maryland. Fred has done software quality assurance contracting around Silicon Valley at companies such as Apple, Intel, Adobe, KLA Tencor, LogiGear, and many others. His hobbies include bicycling, classical keyboarding, and coding in C and Python. Fred currently lives in Mountain View.
Fred Murphy on Linkedin
