Testing Smoke Detectors

People rely on software more every year, so it is critical to test it. But one thing that often gets overlooked, and that should be tested just as regularly, is the smoke detector.

As the relatively young field of software quality engineering matures, with all its emerging trends and terminology, software engineers often overlook that the software they test has a close parallel in something they should test regularly at home: their smoke detectors.

A silent smoke detector gives occupants peace of mind; no news is good news. But smoke detectors need to be tested periodically to ensure they are still alive and capable of saving lives. Since we rely more and more on software every year, testing it is just as critical. Software bugs can have consequences ranging from a wrong typeface to a catastrophic loss of life.

A smoke detector has essentially three components: a power supply, a smoke sensor, and an alarm unit. Each component is tested in a different manner, both individually and in combination. Similarly, modern software is divided into individual modules that are written by different developers, constantly changed and replaced, and not always compatible with one another.
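
As a rough software analogue (the class names below are hypothetical, chosen only for illustration, not taken from any real detector firmware), the three components could be modeled as three small, independently testable Python modules:

```python
# A minimal sketch, not a real design: one hypothetical class per component,
# mirroring how software is split into independently developed modules.

class PowerSupply:
    """Battery: reports whether it still delivers adequate voltage."""
    def __init__(self, voltage: float = 9.0):
        self.voltage = voltage

    def has_adequate_voltage(self) -> bool:
        return self.voltage >= 7.0  # assumed low-voltage threshold


class SmokeSensor:
    """A glorified switch: passes current only when smoke is present."""
    def output_current(self, powered: bool, smoke_present: bool) -> bool:
        return powered and smoke_present


class AlarmUnit:
    """Turns an input current into an audible alarm."""
    def __init__(self):
        self.sounding = False

    def receive_current(self, current: bool) -> None:
        self.sounding = current
```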

The power supply, or battery, has a built-in unit test: the LED that indicates the battery has adequate voltage. The user’s role is to habitually verify, visually, that the LED is on. Modern smoke detectors can also throw exceptions, triggering a chirping noise or a recorded voice message when the battery is weak. Still, the LED and the low-power warnings only test the power supply.
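
In software terms, the LED check corresponds to a unit test of the battery alone. A minimal sketch, reusing the hypothetical PowerSupply class from above but repeated here so the example runs on its own:

```python
import unittest

class PowerSupply:
    """Hypothetical battery module (as in the earlier sketch)."""
    def __init__(self, voltage: float = 9.0):
        self.voltage = voltage

    def has_adequate_voltage(self) -> bool:
        return self.voltage >= 7.0  # assumed low-voltage threshold


class PowerSupplyUnitTest(unittest.TestCase):
    """Tests the power supply in isolation, the software analogue of the LED check."""

    def test_fresh_battery_reports_adequate_voltage(self):
        self.assertTrue(PowerSupply(voltage=9.0).has_adequate_voltage())

    def test_weak_battery_triggers_low_power_warning(self):
        self.assertFalse(PowerSupply(voltage=5.0).has_adequate_voltage())


if __name__ == "__main__":
    unittest.main()
```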

The alarm unit is the main component exercised by the user’s manual test. An alarm unit emits an audible alarm when it receives an input current.

This is a fairly standard test case. The input conditions are that a smoke detector is installed, ready, and equipped with a battery. The input “data” is manual pressure on the test button. The expected output “data” is an audible alarm. The expected output conditions are that the device can be silenced manually and reset, also known as the teardown tasks.
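
Expressed as a hedged unittest sketch (the SmokeDetector stand-in below is invented purely to make the example self-contained), the same test case splits cleanly into setup, input data, expected output, and teardown:

```python
import unittest

class SmokeDetector:
    """Hypothetical stand-in for the device under test."""
    def __init__(self):
        self.installed = True        # input condition: ready and installed
        self.battery_ok = True       # input condition: equipped with a battery
        self.alarm_sounding = False

    def press_test_button(self):
        if self.installed and self.battery_ok:
            self.alarm_sounding = True

    def silence_and_reset(self):
        self.alarm_sounding = False


class ManualButtonTest(unittest.TestCase):
    def setUp(self):
        # Input conditions: detector installed and equipped with a battery.
        self.detector = SmokeDetector()

    def test_button_press_sounds_alarm(self):
        self.detector.press_test_button()               # input "data"
        self.assertTrue(self.detector.alarm_sounding)   # expected output "data"

    def tearDown(self):
        # Teardown tasks: silence and reset the device.
        self.detector.silence_and_reset()


if __name__ == "__main__":
    unittest.main()
```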

The alarm unit is a black box under test. We are not concerned with how the alarm turns current into sound, just that it sounds when triggered.

The alarm unit has presumably been unit tested at the factory. When we do a manual test of the smoke detector, we are doing an integration test of the partial system, verifying that two previously tested components, the battery and alarm unit, will function together. The alarm sounding after the test button is depressed verifies that the power supply, the alarm unit, and the connecting wires (the interface between the two units) all function properly.
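
A rough sketch of that integration test (all class and function names are hypothetical): two already unit-tested parts are wired together, and only the interface between them is checked.

```python
import unittest

class PowerSupply:
    def __init__(self, voltage: float = 9.0):
        self.voltage = voltage
    def supplies_current(self) -> bool:
        return self.voltage >= 7.0  # assumed threshold

class AlarmUnit:
    def __init__(self):
        self.sounding = False
    def receive_current(self, current: bool) -> None:
        self.sounding = current

def press_test_button(battery: PowerSupply, alarm: AlarmUnit) -> None:
    """The 'wiring': routes current from the battery straight to the alarm."""
    alarm.receive_current(battery.supplies_current())

class BatteryAlarmIntegrationTest(unittest.TestCase):
    def test_button_routes_battery_current_to_alarm(self):
        battery, alarm = PowerSupply(voltage=9.0), AlarmUnit()
        press_test_button(battery, alarm)
        self.assertTrue(alarm.sounding)

    def test_dead_battery_means_no_alarm(self):
        battery, alarm = PowerSupply(voltage=0.0), AlarmUnit()
        press_test_button(battery, alarm)
        self.assertFalse(alarm.sounding)

if __name__ == "__main__":
    unittest.main()
```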

The test button is not an integral part of the system under test; it is a test harness that aids in testing. It contributes nothing to the device’s intended purpose of alerting occupants to a fire. A smoke detector would function just the same without a test button or an LED; we simply could not test it.

The aforementioned manual black-box integration test still misses one key system component: the smoke sensor. When the test button is pressed, it feeds current directly to the alarm unit, bypassing the smoke sensor. Hearing the alarm after pressing the button therefore does not prove that the smoke detector will react to actual smoke. A test harness of this kind feeds artificial input data to a component under test, rather than output data from the upstream component, and observes the output; the component undergoes bottom-up testing.
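
In code, the test button plays the role of a driver: a harness that feeds synthetic input to the component under test instead of real output from the upstream sensor. A minimal sketch, with all names invented for illustration:

```python
class AlarmUnit:
    """Component under test (hypothetical stand-in)."""
    def __init__(self):
        self.sounding = False
    def receive_current(self, current: bool) -> None:
        self.sounding = current

def driver_feed_artificial_current(alarm: AlarmUnit) -> bool:
    """Driver: plays the role of the test button, bypassing the smoke sensor
    entirely and feeding a synthetic current straight to the alarm unit."""
    alarm.receive_current(True)   # artificial input data, not upstream output
    return alarm.sounding

if __name__ == "__main__":
    # Bottom-up check: the lower-level alarm unit works, but nothing here
    # proves the real sensor would ever trigger it.
    assert driver_feed_artificial_current(AlarmUnit())
    print("alarm unit responds to synthetic input")
```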

The smoke sensor is essentially a glorified switch that allows current to pass through when near smoke and blocks the flow otherwise. We trust the manufacturer to test the smoke sensor for a lifetime of service. A research lab presumably has some sort of “smoke room,” which simulates the structure and air flow of the rooms where end users will place the smoke detectors. Researchers can place multiple smoke sensors around the room and remotely introduce smoke of different types and concentration levels.

It is not necessary here to know how a smoke sensor actually senses smoke; the test is to verify that a smoke sensor will emit an output current when surrounded by smoke. Also, instead of alarm units, the smoke sensors under test are connected to stubs or recorders, and they undergo top-down testing. Using stubs has many advantages over real output components.

With stubs, many different smoke sensors can be under test at once. Each stub records if and when the sensor under test emits an output current and directly populates a database for analysis. Also, a human does not have to enter the smoke-filled room while the test is underway. Furthermore, the same smoke sensors may be connected to different output devices: alarm units, voice speakers, fire sprinklers, or a direct fire-department connection. A stub can substitute for any kind of output device. Similarly, a software module under test may be designed to call other modules which are not under test or have yet to be written; these called modules are replaced with stubs, which may be merely a single line of code that prints “module xyz is called and run here”.
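
A hedged sketch of such a stub (the sensor, device, and recording names are all hypothetical): instead of a real alarm unit, sprinkler, or fire-department link, the sensor under test is wired to a recorder that simply logs each call.

```python
from datetime import datetime

class OutputDeviceStub:
    """Stands in for any downstream device: alarm, sprinkler, fire-department link."""
    def __init__(self, name: str):
        self.name = name
        self.events = []            # in place of a real results database

    def receive_current(self, current: bool) -> None:
        if current:
            self.events.append(datetime.now())
        print(f"module {self.name} is called and run here")  # the one-line stub

class SmokeSensor:
    """Hypothetical sensor under test: emits current to its output device near smoke."""
    def __init__(self, output_device):
        self.output_device = output_device

    def sample_air(self, smoke_present: bool) -> None:
        self.output_device.receive_current(smoke_present)

if __name__ == "__main__":
    stub = OutputDeviceStub("alarm_unit")
    sensor = SmokeSensor(output_device=stub)
    sensor.sample_air(smoke_present=True)   # remote smoke injection, no human in the room
    assert len(stub.events) == 1
```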

Of course, this testing with smoke is not to be confused with a smoke test, in which a new component is connected and launched, just to assert it will power on without “making smoke”, to determine if further testing can or should start.
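
In software, a smoke test is often little more than a sketch like this (names invented for illustration), asserting that the system comes up at all before deeper testing begins:

```python
class FakeDetector:
    """Hypothetical stand-in so the sketch runs on its own."""
    def power_on(self) -> bool:
        return True

def smoke_test(detector_factory) -> bool:
    """Assemble and power on the system under test; any exception or failure
    to start means further testing should not even begin."""
    try:
        return detector_factory().power_on()
    except Exception:
        return False

if __name__ == "__main__":
    print("proceed to full test suite" if smoke_test(FakeDetector)
          else "halt: build does not even power on")
```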

A system test will verify that a complete smoke detector emits an alarm of a certain decibel level when surrounded by smoke containing a certain carbon monoxide concentration.
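
As a sketch (the assembled-detector model and the 85 dB / 70 ppm thresholds below are invented for illustration, not real certification limits), a system test exercises the fully assembled device end to end:

```python
import unittest

class AssembledDetector:
    """Hypothetical end-to-end model: sensor, wiring, and alarm together."""
    def alarm_decibels(self, co_ppm: float) -> float:
        # Invented behavior: an 85 dB alarm once CO crosses 70 ppm.
        return 85.0 if co_ppm >= 70.0 else 0.0

class SystemTest(unittest.TestCase):
    def test_alarm_loud_enough_at_specified_concentration(self):
        # Thresholds here are illustrative only.
        self.assertGreaterEqual(AssembledDetector().alarm_decibels(co_ppm=70.0), 85.0)

if __name__ == "__main__":
    unittest.main()
```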

An acceptance test, following the system test, will validate that the smoke detector is suitable to protect a particular household from fire: that the alarm can wake the residents through closed doors and, ideally, will not report false positives when triggered by smoke from cooking, ashtrays, incense, and so on, given the layout of the house and the residents’ lifestyle.

It is cumbersome and dangerous to test a smoke detector with real smoke, so few homeowners do this regularly. Instead, they do periodic integration tests and rely on unit tests. Besides, many home smoke detectors get unintended system tests when smoke from regular kitchen cooking triggers the alarm.

Fred Murphy
Fred Murphy grew up in Menlo Park, where he started programming on a TRS-80. He received his degree in Computer Science from Loyola University in Maryland. Fred has done software quality assurance contracting around Silicon Valley at companies such as Apple, Intel, Adobe, KLA Tencor, and LogiGear, among many others. His hobbies include bicycling, classical keyboarding, and coding in C and Python. Fred currently lives in Mountain View.
Fred Murphy on LinkedIn
