Smoke Testing: An Exhaustive Guide For A Non-Exhaustive Suite

Has this ever happened to you: you’ve been testing for a while, perhaps building off of a branch, only to find out that, after all this time, something big is wrong. It’s a bad build, and now you have to go backwards, fix something, and get a new build. Basically, you just wasted your time. For most testers, the answer is a resounding yes. Smoke tests are designed to help you in this exact situation.

That question is how I always begin discussions on Smoke Testing, by keeping it simple. Smoke tests are about build validation. And bad builds happen––it’s part of development. It’s not necessarily a bad thing that builds break. It may mean that you are actually making good progress.

So, if you identify with this situation (which you probably do), or you’re just looking to further your knowledge, stay tuned to find out about all of the ins and outs of Smoke Testing, including best practices, recommendations, and more.

What is Smoke Testing?

Smoke Testing is one of the most crucial and versatile types of testing available as a tool for a quick assessment of the software build. The demand for good, fast, and effective Smoke Testing is now greater than ever. Every new, sophisticated development paradigm, from CI and CD to containers, needs a smoke test suite. But what exactly is it? What goes into a smoke test suite? And what does it actually do? This is what we will sort out in this article.

Simply put, the smoke test suite is a set of tests you run against a build to see if it’s able to be tested. This test suite says, “Yes, this is a good enough build to progress on to the next task. Should we keep testing?” In the old days, this was called a build validation test (BVT).

One of the first instances I recall of the term “smoke test” being used was by Stephen McConnell. It’s quite possible that it was used before this instance, but by this point, it was being used frequently.

Smoke Testing is about validating the build. In the Dot.Com Era, we sometimes called it a Launch and Load Test: Can you launch an application? If you go to a webpage does the whole page load? Did every ActiveX control load? Is it testable? Can we proceed? I have worked on various hardware products here in Silicon Valley. In the very early days at Palm Computing working on Palm Pilots, we sometimes got a build that was a piece of plywood with various hardware nailed to the plywood, and then we would plug it in. If it smoked, it failed. If it didn’t smoke, we could keep going.

Why Builds Break

Do you want to go fast? Things may break. And to some people, breaking is good––you just want to know it right away! That is what a smoke test is for.

If you read a book on new development paradigms or talk to some of the thought leaders in the development world today, at some point they will bring up the idea of fail fast; they could also bring up the idea of innovation, and they will probably also talk about speed. Experimenting might be frightening to those who think that software development is a science and not an art. But, talk to developers without a technical manager in the room and they will agree: A lot of times, you just try something and see if it works. You want to know right away if it does or does not work, that way you can try something else, especially if it’s a problem area. Part of this experimenting is knowing that sometimes things break—that’s all part of innovation and optimization. So, I’ll want to experiment to find out quickly if it works or not. If it works, I’m done! If it doesn’t, run another, and see if that works.

It’s comparable to the Lean Principle of Deliver as Fast as Possible. Test teams need to give immediate feedback. If a programmer runs an experiment, a key part of Continuous Integration is that we get them a qualified build as soon as we can. So, you pull a build and run a fast smoke test to qualify said build.

What Smoke Testing is Not

Smoke Testing is an often misunderstood phrase in software development, particularly by non-technical folks on the product side. If I say to the team at a daily standup meeting that the smoke test failed, does everybody on the team know what that means? It simply means that we need a new build because this one failed; it doesn’t mean we have major regressions, or that the product is in worse shape, or that a lot of things broke… it just means we need a new build.

A smoke test will never tell you, “Everything works,” or, “Everything is okay.” It also will not tell you, “Nothing broke.” A smoke test is not an exhaustive test–it is not your full regression. All that a smoke test will tell you is that the build is good and that you can move on to the next development phase.

Don’t Get Hung Up by the Name

Many people get hung up on the differences in nomenclature between BVT, smoke tests, and sanity tests. The purpose of this discussion is to better understand why we need to smoke test. Let’s not get hung up on the name of this suite.

I never use the phrase sanity test because it doesn’t really have a historical reference in Software Testing. It’s a newer phrase that always has a local meaning to the company who is using it. I have even been in larger organizations where the word “sanity” has multiple meanings across the platform. There is absolutely no standard on what the word “sanity” means––it’s insanity! So I prefer to not use it.

Over the years, I have come across some heated discussions regarding the use of a sanity test versus a smoke test, or the real definition of an acceptance test, or what a regression suite actually involves. It’s less important to settle on the name of the test than it is to actually understand the test goal. A common problem I come across in my consulting work is people haphazardly naming various tests, which in turn causes a lot of confusion and misunderstood information. This is the crux of the problem we need to solve. The name itself does not matter. What truly matters is having a common understanding across the team. At this point, I want to repeat what I said previously: Passing the smoke test means you have a good build; it does not mean that there are no bugs.

If your team wants to call Smoke Testing something else–good for you! Just make sure that everybody on the team understands what the various tests suites do, as well as what they mean. If you want to call what I described as a smoke test a sanity test–be my guest! If you want to call it a build validation test–fantastic. Just make sure that no matter what you call the set of tests I’m describing, everyone on the team understands their purpose is to validate the build.

Is Classifying Tests Really that Important?

Yes, it actually is. For communication, transparency, coverage, analyzing risk, test planning–all the things that make up strategy–making sure that your team communicates correctly and shares a mutual understanding is absolutely essential. Test types are also the easiest way to define a test strategy.

There are some clear-cut differentiations in types of testing, such as Performance Testing, Accessibility Testing, and Localization Testing. But in the realm of defining Functional Test types, the water gets a little murky… and this is where problems can arise. What is UI Testing versus Usability Testing? What is Functional Testing versus Feature Testing? Can you define or even differentiate between the two? Try to clearly differentiate between Functional Tests, Feature Tests, Workflow Tests, and End-to-End Tests.  

Any misunderstanding or vagueness in what these words actually mean can give people a wrong sense of security of what is and is not being tested. It is the goals of each of these test types that differentiate them and provide the team with different information, different confidence, and a different level of maturity. We’ve created a mini-glossary for you (below) in order to help clarify the differences between similar types of testing.

Figure 1 works to display this nomenclature debacle in terms of candy––so many chocolate bars have the same basic ingredients, but have so many different names. The same thing happens in regards to testing “ingredients.”

There is very often overlap. You could have the same test run in multiple suites. You may have your smoke tests be a small subset of your full regression. You could have a workflow test as part of your UI suite, as well as your usability suite.

Figure 1: What do candy and testing have in common? It doesn’t always matter what you call them–as long as there is shared understanding!

Should Smoke Tests be Automated?

At this point in software development, I’m obligated to say yes. There was a time when some companies ran manual smoke tests for one reason or another. But with present-day Automation tools being so good and most Test Engineers being more skilled at Automation, it’s hard to justify why resources are not committed to automating such a beneficial suite of tests. I suggest that it be the first test suite you automate because you will use it right away. It needs to be easy to run, and able to be run by someone else or by a tool, such as Jenkins or TeamCity.
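To make the idea concrete, here is a minimal sketch of what an automatable smoke suite could look like. The check names and their bodies are hypothetical stand-ins (a real suite would actually launch the application, hit a health endpoint, and so on); the point is the shape: a small registry of fast checks and a single exit code that a CI tool such as Jenkins or TeamCity can consume.

```python
import sys

# Registry of smoke checks; a decorator keeps the suite easy to extend.
SMOKE_CHECKS = []

def smoke(fn):
    """Register a function as part of the smoke suite."""
    SMOKE_CHECKS.append(fn)
    return fn

@smoke
def app_launches():
    return True  # hypothetical stand-in for "the application starts"

@smoke
def home_page_loads():
    return True  # hypothetical stand-in for "the key page renders fully"

def run_smoke():
    """Run every registered check; stop at the first failure (fail fast).

    Returns 0 for a good build, 1 for a bad build, so any CI tool can
    gate the pipeline on the process exit status.
    """
    for check in SMOKE_CHECKS:
        if not check():
            print(f"BAD BUILD: {check.__name__} failed")
            return 1
    print("GOOD BUILD: proceed to full testing")
    return 0

if __name__ == "__main__":
    sys.exit(run_smoke())
```

A CI job would simply run this script after each build and treat a non-zero exit status as “we need a new build.”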

Time: Is a Smoke Test the Same as a Regression Test?

The major attribute of Smoke Testing is time. You run smoke tests to deliver results as fast as possible and to build often. Some teams have a goal to break often–build it, break it, fix it (BIBIFI) is a development philosophy. This means that since an overnight regression test does not deliver immediate results, it’s not a smoke test. The old rule of thumb is that smoke test results have to be delivered within an hour. Think of it this way: if I commit a chunk of code, I want to find out right away if it worked while it’s on my mind and while I’m working on it, because if it didn’t work, I want to fix it right away. I don’t want to commit a chunk of code and wait until the next day to find out that information. At that point, I would be working on other things and have to switch tasks back to yesterday’s work––that is not delivering results as fast as possible.

Time: Automated Smoke Tests

Automation is diverse. There are many varieties of Automation, and every organization has its own unique flavor and chemistry for what works and what doesn’t. Knowing that, time matters more than numbers. If you have 500 tests automated and they take 15 minutes to run, that sounds great for a smoke test. If you have 500 tests and they take overnight to run, that is not a smoke test. This is another place where smoke tests get complicated. I have a client who has 10,000 automated tests in their suite; because of how, what, and at what level they automated, it takes them less than 30 minutes to run all 10,000 tests. Thus, their regression suite is the same as their smoke test because (a) they run all tests against every build, and (b) time. I have another client, also with about 10,000 tests; because of how their Automation process works, the suite takes overnight to run. They call it their regression suite––it’s not their smoke test.

If you have only a few tests automated and it takes 20 minutes to run all of your Automation, go ahead and run them all. But if, like many companies, you have an overnight run of 100,000 tests across various platforms and devices, clearly, you cannot run all of these every time you do a build if the goal of CI/CD is multiple builds per day. Many organizations have a small, fast subset of tests for the purpose of smoke tests. You can’t judge a smoke test by the number of tests–it’s the time to execute.
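One way to think about carving a smoke subset out of a large suite, judged by execution time rather than by test count, is sketched below. The test names and recorded timings are hypothetical, and the greedy fastest-first selection is just one possible policy; real teams would also weight tests by risk and coverage.

```python
def pick_smoke_subset(tests, budget_seconds):
    """Greedily fill a time budget with the fastest tests first.

    `tests` maps a test name to its recorded runtime in seconds.
    Returns the names that fit inside the budget, fastest first.
    """
    chosen, total = [], 0.0
    for name, seconds in sorted(tests.items(), key=lambda kv: kv[1]):
        if total + seconds > budget_seconds:
            break
        chosen.append(name)
        total += seconds
    return chosen

# Hypothetical recorded runtimes from previous runs.
recorded = {
    "login_e2e": 300.0,
    "api_health": 2.0,
    "page_load": 5.0,
    "full_checkout": 900.0,
}

print(pick_smoke_subset(recorded, budget_seconds=600))
# → ['api_health', 'page_load', 'login_e2e']
```

Note how `full_checkout` stays in the overnight regression run: it blows the budget, and the budget, not the count, is what makes the subset a smoke test.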

One more aspect: the analysis of results has to be fast, too. If you run a suite of tests of whatever size and regularly get a plethora of failures that you have to analyze to figure out why they failed, then that does not give you the immediate results a smoke test needs.

What Goes into a Smoke Test Suite?

There are a few guidelines I like to give for tests that go into a smoke test suite. For the most part, you want tests that can run very fast with low maintenance; this way, when a test fails, it’s easy to diagnose, so you know why it failed and you know what to fix.

You’ll also want to include a few big, long End-to-End (E2E) cases because they are often the perfect integration tests. But wait… you may ask, “Michael, didn’t you just say that I should include tests that run quickly with little maintenance? E2E Workflow tests like these break often, are often difficult to diagnose, and are costly and problematic to maintain!” Yes, exactly! These tests will provide you with the exact information you need. Smoke Testing is more so at the level of Integration Testing than it is isolated Functional Testing—your unit tests should catch breaks or fails at the individual function level. A smoke test should tell you that, at a higher level, everything is playing together nicely. So, yes—include some End-to-End scenarios… but just a few.

When it comes to Smoke Testing, there is no magic recipe. You just need to ensure that you have some balance in your smoke test suite—you can decide for yourself what tests to include for your specific needs.

Here are some ideas to include or consider:

  • Unit Tests. These run fast, and when they break, it is often clear as to why they broke; they also have a low maintenance cost. Owned by Developers.
  • Higher-Level Tests: Tests such as service/API level, integration tests, path tests, workflow tests, or scenario tests; they may take a bit longer to write and a bit longer to run, and when they break, it may be less clear why.
  • User Interface (UI) Tests: These generally will take longer to write, longer to run, and, when they break, even longer to diagnose and isolate the issue.

It must be noted that smoke tests are usually not at the unit level—since unit tests are so fast, they can be run at any time. For smoke tests, the build could pass all of the unit tests, but if I accept the build solely from passing unit tests, I run the risk of the integration breaking—I would never know this from the unit tests alone. You want to maintain a balance of not too long to run and enough testing coverage to say, “Yes, this is a testable build!”
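The risk of qualifying a build on unit tests alone can be shown with a contrived toy example (the functions here are hypothetical, not from any real product): each unit passes its own test, yet a contract mismatch between them breaks the integration, which only an integration-level smoke check would catch.

```python
def parse_amount(text):
    # Unit: parses a dollar string like "12.50" into integer cents.
    return int(round(float(text) * 100))

def format_amount(amount):
    # Unit: formats an amount for display -- but it expects DOLLARS,
    # while parse_amount produces CENTS. Neither unit test sees this.
    return f"${amount:.2f}"

# Unit-level checks: both pass in isolation.
assert parse_amount("12.50") == 1250
assert format_amount(12.50) == "$12.50"

def roundtrip_ok(text):
    # Integration-level smoke check: round-trip the two units together.
    return format_amount(parse_amount(text)) == f"${float(text):.2f}"

print(roundtrip_ok("12.50"))
# → False: the build passes its unit tests but is not a good build.
```

This is the sense in which a smoke test sits closer to Integration Testing: it confirms the pieces play together, not that each piece works alone.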

Other Uses for Smoke Tests

If you’re doing Continuous Integration (CI), you need to have an automated smoke test suite—CI is essentially based on having one. The same principle applies if you’re doing Continuous Delivery (CD). You need an automated, fast suite of easily understood and easily maintained tests in order to show consistency from server to server along the deployment pipeline. If your team uses containers, you’ll want to perform a bit more testing on the changed container, but you’ll also want a fast suite of smoke tests to perform across the entire system in order to ensure everything is playing together nicely.

Summarizing with a Story

Do you feel like you now have a better understanding of what Smoke Testing is? Let me go back to the first time I used Smoke Testing on a project of mine. I was leading a project in San Jose, CA—the heart of Silicon Valley. We had all of our developers onsite in San Jose; however, we had all of our Test Engineers in Bangalore, India. We were getting weekly builds; every Friday afternoon, the Build Engineer would upload a build to the server, then we would go home for the weekend. Monday morning, the San Jose team would come back into the office, and every week, we expected a bunch of old bugs closed and a bunch of new bugs opened—Monday was one of our busiest days. But every once in a while (and sometimes regularly), I would get into the office on Monday morning, open up my email, and get an email saying the build failed.

The big problem here was that by the time I saw this email, the office in India had already closed and everyone was home for the night. There was nothing we could do, and nothing got done that day. The whole day was a waste—we lost 20% of our workweek! So, what we did to fix this was Friday afternoon, after the Build Engineer was done uploading the build, we would jump on the build and test it for around 30 or 45 minutes; this way, we could say, “Okay, this is a good build,” and continue to put it on the server, or we could say, “No, this is a bad build,” and either fix it, or just wait until Monday. The test team would continue on the existing build and get a new build Tuesday without wasting Monday.

This didn’t waste anybody’s time. The new testing process had to be really fast because on a Friday night, we certainly did not want to stay in the office very long. So, we ran a bunch of fast, high-level tests just to make sure we didn’t waste next Monday and say that it’s a good build. At the time, it was Manual Testing. At first, we called it a build verification test, and then transitioned to calling it a smoke test. Then, when we began automating tests, that test suite became the first one we automated, and we got to leave work early on a Friday night!

Conclusion

The demand for effective Smoke Testing is now greater than ever. A smoke test suite is a quick and effective way to assess your progress on your build, and to see if you are able to proceed further or promote a build to the next stage. The first thing you need to do is sit with your team and get every person on your team on the same page regarding what Smoke Testing is for your organization. Remember: there is no magically perfect smoke test suite. Once you do this, you can develop an automated, effective, and fast smoke test suite that can help your team both build and progress faster.

Michael Hackett
Michael is a co-founder of LogiGear Corporation, and has over two decades of experience in software engineering in banking, securities, healthcare, and consumer electronics. Michael is a Certified Scrum Master and has co-authored two books on software testing: Testing Applications on the Web: Test Planning for Mobile and Internet-Based Systems (Wiley, 2nd ed. 2003) and Global Software Test Automation (Happy About Publishing, 2006). He is a founding member of the Board of Advisors at the University of California Berkeley Extension and has taught for the Certificate in Software Quality Engineering and Management at the University of California Santa Cruz Extension. As a member of IEEE, his training courses have brought Silicon Valley testing expertise to over 16 countries. Michael holds a Bachelor of Science in Engineering from Carnegie Mellon University.

