Smoke Testing: An Exhaustive Guide For A Non-Exhaustive Suite

Has this ever happened to you: you’ve been testing for a while, perhaps building off of a branch, only to find out that, after all this time, something is seriously wrong. It’s a bad build, and now you have to go backward, fix something, and get a new build. Basically, you just wasted your time. The answer here is usually a resounding yes. Smoke tests are designed to help you in this exact situation.

That question is how I always begin discussions on Smoke Testing, by keeping it simple. Smoke tests are about build validation. And bad builds happen––it’s part of development. It’s not necessarily a bad thing that builds break; it may mean that you are actually making good progress.

So, if you identify with this situation (which you probably do), or you’re just looking to further your knowledge, stay tuned to find out about all of the ins and outs of Smoke Testing, including best practices, recommendations, and more.

What is Smoke Testing?

Smoke Testing is one of the most crucial and versatile types of testing available as a tool for a quick assessment of the software build. The demand for good, fast, and effective Smoke Testing is now greater than ever. Every new, sophisticated development paradigm, from CI and CD to containers, needs a smoke test suite. But what exactly is it? What goes into a smoke test suite? And what does it actually do? This is what we will sort out in this article.

Simply put, the smoke test suite is a set of tests you run against a build to see whether it is testable. This test suite answers the question, “Is this a good enough build to progress to the next task? Can we keep testing?” In the old days, this was called a build verification test (BVT).

One of the first instances I recall of the term “smoke test” being used was by Steve McConnell. It’s quite possible that it was used before this, but by that point, it was being used frequently.

Smoke Testing is about validating the build. In the Dot.Com Era, we sometimes called it a Launch and Load Test: Can you launch an application? If you go to a webpage does the whole page load? Did every ActiveX control load? Is it testable? Can we proceed? I have worked on various hardware products here in Silicon Valley. In the very early days at Palm Computing working on Palm Pilots, we sometimes got a build that was a piece of plywood with various hardware nailed to the plywood, and then we would plug it in. If it smoked, it failed. If it didn’t smoke, we could keep going.
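The “launch and load” idea above can be sketched as a tiny script. This is a minimal, illustrative sketch (not from the original article); the base URL and page paths are hypothetical placeholders for whatever application you are testing. All it asks is: does each key page load at all?

```python
# Minimal "launch and load" smoke check. BASE_URL and PAGES are
# hypothetical placeholders for the application under test.
import urllib.request
import urllib.error

BASE_URL = "http://localhost:8080"       # hypothetical app under test
PAGES = ["/", "/login", "/dashboard"]    # hypothetical key pages

def page_loads(url, timeout=5):
    """Return True if the page responds with HTTP 200, else False."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False

def smoke_check(base_url=BASE_URL, pages=PAGES):
    """Return the list of pages that failed to load; empty means pass."""
    return [p for p in pages if not page_loads(base_url + p)]
```

If `smoke_check()` comes back empty, the build launched and loaded; if not, it “smoked,” and you need a new build before any deeper testing is worthwhile.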

Why Builds Break

Do you want to go fast? Things may break. And to some people, breaking is good––you just want to know it right away! That is what a smoke test is for.

If you read a book on new development paradigms or talk to some of the thought leaders in the development world today, at some point they will bring up the idea of failing fast; they may also bring up innovation, and they will probably talk about speed. Experimenting might be frightening to those who think that software development is a science and not an art. But talk to developers without a technical manager in the room and they will agree: a lot of the time, you just try something and see if it works. You want to know right away whether it works so that you can try something else, especially in a problem area. Part of this experimenting is knowing that sometimes things break—that’s all part of innovation and optimization. So, I want to experiment and find out quickly whether it works. If it works, I’m done! If it doesn’t, I run another experiment and see if that works.

It’s comparable to the Lean Principle of Deliver as Fast as Possible. Test teams need to give immediate feedback. If a programmer runs an experiment, a key part of Continuous Integration is that we get them a qualified build as soon as we can. So, you pull a build and run a fast smoke test to qualify said build.

What Smoke Testing is Not

Smoke Testing is an often misunderstood phrase in software development, particularly by non-technical folks on the product side. If I say to the team at a daily standup meeting that the smoke test failed, does everybody on the team know what that means? It simply means that we need a new build because this one failed; it doesn’t mean we have major regressions, or that the product is in worse shape, or that a lot of things broke… it just means we need a new build.

A smoke test will never tell you, “Everything works,” or, “Everything is okay.” It also will not tell you, “Nothing broke.” A smoke test is not an exhaustive test–it is not your full regression. All that a smoke test will tell you is that the build is good and that you can move on to the next development phase.

Don’t Get Hung Up by the Name

Many people get hung up on the differences in nomenclature between BVT, smoke tests, and sanity tests. The purpose of this discussion is to better understand why we need to smoke test. Let’s not get hung up on the name of this suite.

I never use the phrase sanity test because it doesn’t really have a historical reference in Software Testing. It’s a newer phrase that always has a local meaning to the company using it. I have even been in larger organizations where the word “sanity” has multiple meanings across the organization. There is absolutely no standard on what the word “sanity” means––it’s insanity! So I prefer not to use it.

Over the years, I have come across some heated discussions regarding the use of a sanity test versus a smoke test, or the real definition of an acceptance test, or what a regression suite actually involves. It’s less important to settle on the name of the test than it is to actually understand the test goal. A common problem I come across in my consulting work is people haphazardly naming various tests, which in turn causes a lot of confusion and misunderstood information. This is the crux of the problem we need to solve. The name itself does not matter. What truly matters is having a common understanding across the team. At this point, I want to repeat what I said previously: Passing the smoke test means you have a good build; it does not mean that there are no bugs.

If your team wants to call Smoke Testing something else–good for you! Just make sure that everybody on the team understands what the various test suites do, as well as what they mean. If you want to call what I described as a smoke test a sanity test–be my guest! If you want to call it a build validation test–fantastic. Just make sure that no matter what you call the set of tests I’m describing, everyone on the team understands their purpose is to validate the build.

Is Classifying Tests Really that Important?

Yes, it actually is. For communication, transparency, coverage, analyzing risk, test planning–all the things that make up strategy–making sure that your team communicates correctly and shares a mutual understanding is absolutely essential. Test types are also the easiest way to define a test strategy.

There are some clear-cut differentiations in types of testing, such as Performance Testing, Accessibility Testing, and Localization Testing. But in the realm of defining Functional Test types, the water gets a little murky… and this is where problems can arise. What is UI Testing versus Usability Testing? What is Functional Testing versus Feature Testing? Can you define or even differentiate between the two? Try to clearly differentiate between Functional Tests, Feature Tests, Workflow Tests, and End-to-End Tests.  

Any misunderstanding or vagueness in what these words actually mean can give people a false sense of security about what is and is not being tested. It is the goals of each of these test types that differentiate them and provide the team with different information, different confidence, and a different level of maturity. We’ve created a mini-glossary for you (below) in order to help clarify the differences between similar types of testing.

Figure 1 illustrates this nomenclature debacle in terms of candy––so many chocolate bars have the same basic ingredients but so many different names. The same thing happens with testing “ingredients.”

There is very often overlap. You could have the same test run in multiple suites. You may have your smoke tests be a small subset of your full regression. You could have a workflow test as part of your UI suite, as well as your usability suite.

Figure 1: What do candy and testing have in common? It doesn’t always matter what you call them–as long as there is shared understanding!

Should Smoke Tests be Automated?

At this point in software development, I’m obligated to say yes. There was a time when some companies ran manual smoke tests for one reason or another. But with present-day Automation tools being so good and most Test Engineers being more skilled at Automation, it’s hard to justify why resources are not committed to automating such a beneficial suite of tests. I suggest that it be the first test suite you automate because you will use it right away. It needs to be easy to run, and able to be run by someone else or by a tool, such as Jenkins or TeamCity.
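To illustrate what “easy to run by a person or a tool” means in practice, here is a minimal sketch of a smoke-test runner, assuming the checks are plain Python callables; both check functions here are hypothetical placeholders. A CI tool such as Jenkins or TeamCity only needs the exit code to pass or fail the build.

```python
# Minimal smoke-test runner: runs each check, reports, and exits
# nonzero on any failure so a CI tool can fail the build automatically.
# Both checks below are hypothetical placeholders.
import sys

def check_app_starts():
    return True  # placeholder: e.g. launch the app, confirm it responds

def check_login_page_loads():
    return True  # placeholder: e.g. fetch /login, confirm HTTP 200

SMOKE_CHECKS = [check_app_starts, check_login_page_loads]

def run_smoke_suite(checks=SMOKE_CHECKS):
    """Run every check; return the names of the ones that failed."""
    return [c.__name__ for c in checks if not c()]

if __name__ == "__main__":
    failed = run_smoke_suite()
    if failed:
        print("SMOKE TEST FAILED:", ", ".join(failed))
        sys.exit(1)  # nonzero exit: CI marks the build bad
    print("Smoke test passed: build is good enough to test.")
```

A single command with a meaningful exit code is all the integration a build pipeline needs.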

Time: Is a Smoke Test the Same as a Regression Test?

The major attribute of Smoke Testing is time. You run smoke tests to deliver results as fast as possible and to build often. Some teams have a goal to break often–build it, break it, fix it (BIBIFI) is a development philosophy. This means that since an overnight regression test does not deliver immediate results, it’s not a smoke test. The old rule of thumb is that smoke test results have to be delivered within an hour. Think of it this way: if I commit a chunk of code, I want to find out right away whether it worked, while it’s on my mind and while I’m working on it, because if it didn’t work, I want to fix it right away. I don’t want to commit a chunk of code and wait until the next day to find out. At that point, I would be working on other things and would have to switch back to yesterday’s work––that is not delivering results as fast as possible.

Time: Automated Smoke Tests

Automation is diverse. There are many varieties of Automation. Every organization has its own unique flavor and chemistry for what works and what doesn’t in Automation. Knowing that, time matters more than numbers. If you have 500 tests automated and they take 15 minutes to run, that sounds great for a smoke test. If you have 500 tests and they take overnight to run, that is not a smoke test. This is another place where smoke tests get complicated. I have a client who has 10,000 automated tests in their suite; because of how, what, and at what level they automated, it takes them less than 30 minutes to run all 10,000 tests. Thus, their regression suite is the same as their smoke test because (a) they run all tests against every build, and (b) time. I have another client, also with about 10,000 tests; because of how their Automation process works, it takes overnight to run. They call it their regression suite–it’s not their smoke test.

If you have only a few tests automated and it takes 20 minutes to run all of your Automation, go ahead and run them all. But if, like many companies, you have an overnight run of 100,000 tests across various platforms and devices, clearly, you cannot run all of these every time you do a build if the goal of CI/CD is multiple builds per day. Many organizations have a small, fast subset of tests for the purpose of smoke tests. You can’t judge a smoke test by the number of tests–it’s the time to execute.
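To make “judge by time to execute, not number of tests” concrete, here is an illustrative sketch (my own, not from the article) of selecting a smoke subset against a time budget; the test names and runtimes are made-up examples.

```python
# Illustrative: pick a smoke subset by time budget, not test count.
# Each entry is (test_name, typical_runtime_seconds); values are made up.
TESTS = [
    ("test_app_launches", 2),
    ("test_login_flow", 10),
    ("test_full_report_export", 1800),
    ("test_checkout_end_to_end", 60),
]

def smoke_subset(tests, budget_seconds=15 * 60):
    """Greedily take the fastest tests until the time budget is spent."""
    chosen, total = [], 0
    for name, secs in sorted(tests, key=lambda t: t[1]):
        if total + secs > budget_seconds:
            break
        chosen.append(name)
        total += secs
    return chosen
```

With a 15-minute budget, the slow overnight-style export test is excluded automatically, no matter how many fast tests fit; the budget, not a test count, defines the suite.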

One more aspect of this analysis is that the feedback has to be fast. If you run a suite of tests of whatever size and regularly get a plethora of failures that you have to analyze to figure out why they failed, then you are not getting the immediate results a smoke test needs.

What Goes into a Smoke Test Suite?

There are a few guidelines I like to give for tests that go into a smoke test suite. For the most part, you want tests that can run very fast with low maintenance; this way, when a test fails, it’s easy to diagnose, so you know why it failed and you know what to fix.

You’ll also want to include a few big, long End-to-End (E2E) cases because they are often the perfect integration tests. But wait… you may ask, “Michael, didn’t you just say that I should include tests that run quickly with little maintenance? E2E Workflow tests like these break often, are often difficult to diagnose, and are costly and problematic to maintain!” Yes, exactly! These tests will provide you with the exact information you need. Smoke Testing is more at the level of Integration Testing than it is isolated Functional Testing—your unit tests should catch breaks or failures at the individual function level. A smoke test should tell you that, at a higher level, everything is playing together nicely. So, yes—include some End-to-End scenarios… but just a few.

When it comes to Smoke Testing, there is no magic recipe. You just need to ensure that you have some balance in your smoke test suite—you can decide for yourself what tests to include for your specific needs.

Here are some ideas to include or consider:

  • Unit Tests. These run fast, and when they break, it is often clear as to why they broke; they also have a low maintenance cost. Owned by Developers.
  • Higher-Level Tests: Tests at the service/API level, integration tests, path tests, workflow tests, or scenario tests; they may take a bit longer to write and a bit longer to run, and it may be less clear why they broke.
  • User Interface (UI) Tests: These generally will take longer to write, longer to run, and, when they break, even longer to diagnose and isolate the issue.

It must be noted that smoke tests are usually not at the unit level—unit tests are so fast that they can all be run at any time. The build could pass all of the unit tests, but if I accept the build solely on passing unit tests, I run the risk of the integration breaking—something I could never know from the unit tests alone. You want to maintain a balance: not too long to run, but enough testing coverage to say, “Yes, this is a testable build!”

Other Uses for Smoke Tests

If you’re doing Continuous Integration (CI), you need to have an automated smoke test suite—CI is essentially based on having one. The same principle applies if you’re doing Continuous Delivery (CD). You need an automated, fast suite of easily understood and easily maintained tests in order to show consistency from server to server along the deployment pipeline. If your team uses containers, you’ll want to perform a bit more testing on the changed container, but you’ll also want a fast suite of smoke tests to perform across the entire system in order to ensure everything is playing together nicely.

Summarizing with a Story

Do you feel like you now have a better understanding of what Smoke Testing is? Let me go back to the first time I used Smoke Testing on a project of mine. I was leading a project in San Jose, CA—the heart of Silicon Valley. We had all of our developers onsite in San Jose; however, we had all of our Test Engineers in Bangalore, India. We were getting weekly builds; every Friday afternoon, the Build Engineer would upload a build to the server, then we would go home for the weekend. Monday morning, the San Jose team would come back into the office, and every week, we expected a bunch of old bugs closed and a bunch of new bugs opened—Monday was one of our busiest days. But every once in a while (and sometimes regularly), I would get into the office on Monday morning, open up my email, and get an email saying the build failed.

The big problem here was that by the time I saw this email, the office in India had already closed and everyone was home for the night. There was nothing we could do, and nothing got done that day. The whole day was a waste—we lost 20% of our workweek! So, to fix this, on Friday afternoons, after the Build Engineer finished uploading the build, we would jump on it and test it for around 30 or 45 minutes; this way, we could say, “Okay, this is a good build,” and leave it on the server, or we could say, “No, this is a bad build,” and either fix it or just wait until Monday. The test team would continue on the existing build and get a new build Tuesday without wasting Monday.

This didn’t waste anybody’s time. The new testing process had to be really fast because on a Friday night, we certainly did not want to stay in the office very long. So, we ran a bunch of fast, high-level tests just to confirm it was a good build and make sure we didn’t waste the following Monday. At the time, it was Manual Testing. At first, we called it a build verification test, and then we transitioned to calling it a smoke test. Then, when we began automating tests, that suite became the first one we automated, and we got to leave work early on a Friday night!


The demand for effective Smoke Testing is now greater than ever. A smoke test suite is a quick and effective way to assess your progress on your build, and to see if you are able to proceed further or promote a build to the next stage. The first thing you need to do is sit with your team and get every person on your team on the same page regarding what Smoke Testing is for your organization. Remember: there is no magically perfect smoke test suite. Once you do this, you can develop an automated, effective, and fast smoke test suite that can help your team both build and progress faster.

Michael Hackett
Michael is a co-founder of LogiGear Corporation, and has over two decades of experience in software engineering in banking, securities, healthcare and consumer electronics. Michael is a Certified Scrum Master and has co-authored two books on software testing. Testing Applications on the Web: Test Planning for Mobile and Internet-Based Systems (Wiley, 2nd ed. 2003), and Global Software Test Automation (Happy About Publishing, 2006). He is a founding member of the Board of Advisors at the University of California Berkeley Extension and has taught for the Certificate in Software Quality Engineering and Management at the University of California Santa Cruz Extension. As a member of IEEE, his training courses have brought Silicon Valley testing expertise to over 16 countries. Michael holds a Bachelor of Science in Engineering from Carnegie Mellon University.

