Smoke vs. Sanity Testing


There are few topics in quality assurance testing that cause as much confusion as smoke testing versus sanity testing. The two names would seem to describe very different practices, and they do; but people still get them confused, since the distinction is somewhat subtle.

Whether you are developing a mobile app, a web service, or an Internet of Things device, you will probably undertake smoke as well as sanity testing along the way, likely in that order. Smoke testing is a more generalized, high-level approach to testing, while sanity testing is more particular and focused on logical details.

Let’s take a look at each one in more depth:

Smoke testing

The first thing you may be wondering is: Why the name “smoke testing”? The name is certainly unusual, but it makes sense. The term originates with hardware testing: test engineers who turn on a PC, server, or storage appliance check for literal smoke coming from the components once the power is running. If no smoke appears, the test is passed; if smoke does appear, all other project-related work is put on hold until the unit can pass the test.

As we can see, the idea is to verify that the most basic functionality is operating properly before additional testing is undertaken. For hardware, that means the ability to power on without catching fire; for software, it means the ability to start up successfully and interact with required libraries and services. That is what the smoke test evaluates.

Smoke testing usually takes place at the beginning of the software testing lifecycle. It verifies the quality of a build, i.e., a collection of files that make up (or “comprise”) a program, and checks to see if basic tasks can be properly executed. The idea is to ensure that the initial build is stable; if the build cannot pass a smoke test, the program must be reconstructed before the testing phase can resume. Some organizations refer to smoke testing as build verification testing.

“In smoke testing, the test cases chosen cover the most important functionality or component of the system,” explained a guide from Guru99. “The objective is not to perform exhaustive testing, but to verify that the critical functionalities of the system [are] working fine. For example, a typical smoke test would be to verify that the application launches successfully, check that the GUI is responsive, etc.”
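To make this concrete, here is a minimal sketch of what such smoke-test cases might look like, written in Python with pytest and the requests library. The base URL and the /health and /login endpoints are hypothetical placeholders for whatever counts as critical functionality in your application.

```python
# Minimal smoke-test sketch (illustrative only): check that the most
# critical functionality responds at all before deeper testing begins.
# BASE_URL and the endpoints below are hypothetical placeholders.
import requests

BASE_URL = "http://localhost:8080"  # placeholder address of the app under test

def test_application_is_up():
    # The software equivalent of "no smoke": the app answers at all.
    response = requests.get(f"{BASE_URL}/health", timeout=5)
    assert response.status_code == 200

def test_login_page_loads():
    # One critical user-facing function, checked shallowly.
    response = requests.get(f"{BASE_URL}/login", timeout=5)
    assert response.status_code == 200
```

Note that each case asserts only that the feature responds, not that it behaves correctly in detail; that depth is deliberately left to later testing.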

Smoke tests reveal plainly recognizable deficiencies that could severely throw a release off schedule. By running a group of test cases that cover the most essential components, testers can determine whether critical functionalities behave as needed. At times, smoke tests may uncover the need for more granular testing, such as a sanity test.

An additional function of smoke tests is to assess whether a new build is testable, covering such questions as “Does the program run?” and “Does the application interface with the system?” The test reveals whether functionality is so obstructed that the build is unprepared for testing that delves more deeply into the software’s functions.

Performing smoke tests

A smoke test can be performed manually, or it can be automated. QA teams can therefore create manual test cases, or write scripts that automatically check whether the software can be installed and launched without incident. An enterprise test management suite can help organize and track your smoke tests.
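If the checks are scripted, the install-and-launch verification can be as simple as the following sketch. The installer command and application flags are hypothetical; substitute whatever your build actually produces.

```python
# Sketch of an automated "can it install and launch?" smoke check.
# The commands and flags below are hypothetical placeholders.
import subprocess

def smoke_check_install_and_launch() -> bool:
    # Step 1: install the build silently (placeholder installer command).
    install = subprocess.run(["./install.sh", "--silent"],
                             capture_output=True, timeout=300)
    if install.returncode != 0:
        return False
    # Step 2: launch the app and ask for its version as a liveness probe.
    launch = subprocess.run(["./myapp", "--version"],
                            capture_output=True, timeout=60)
    return launch.returncode == 0

if __name__ == "__main__":
    ok = smoke_check_install_and_launch()
    print("Smoke check passed" if ok else "Smoke check failed")
```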

A smoke test is most effective when a preliminary code review focused on the code changes has been performed, since the review helps guard against coding defects before the build is exercised. Following the code review, the smoke test checks the changed code, assesses how the changes affect software functionality, and verifies that dependencies are not adversely affected.

Sanity testing

Sanity testing, generally performed subsequent to smoke tests, is sometimes called a sanity check. Like a literal sanity check, it is meant to be less than exhaustive. Instead, sanity tests verify that recent upgrades are not causing any major problems. The “sanity” in the name refers to an assurance that the application has been rationally and sanely developed or updated.

The basics of sanity testing differ from those of smoke testing, as well as from acceptance testing, of which sanity testing is categorized as a subset. Acceptance testing is a much more thorough process; smoke testing is more generic.

Sanity testing is usually done near the end of a test cycle, to ascertain whether bugs have been fixed and whether minor changes to the code are well tolerated. The test is typically executed after receiving a new build, to determine whether the most recent fixes break any component functionality. Sanity tests are often unscripted and may take a “narrow and deep” approach as opposed to the “wide and shallow” route of smoke testing.

While a smoke test can determine whether an application is constructed well, a sanity test helps determine whether the app can fundamentally function well. One example is a sanity test that checks whether a calculator app gives the correct result for 2 + 2. If the component cannot return a result of 4, the process has failed and there is no point yet in testing the program’s ability to handle more advanced operations, such as trigonometric functions. Sanity tests can be performed manually, or with the help of automated tools.
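As a hedged illustration of that calculator example, the sketch below shows how such a sanity check might look in Python; the Calculator class is a stand-in for the real component under test, and the test is meant to be run with pytest.

```python
# Sanity-test sketch for the calculator example above. The Calculator
# class is a stand-in for the real component under test.

class Calculator:
    def add(self, a, b):
        return a + b

def test_addition_sanity():
    calc = Calculator()
    # If 2 + 2 does not return 4, stop here: there is no point yet in
    # testing advanced operations such as trigonometric functions.
    assert calc.add(2, 2) == 4
```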

The sanity test evaluates the rational processes within the application. Its goal is to ensure that obviously false results are not present in component processes, making for a speedier check than granular, in-depth testing. Performed before a more intense set of tests, a sanity test is a concise examination of a program that broadly assures that components produce expected results, without in-depth analysis.

As we can see, there is some overlap between smoke testing and sanity testing, especially in that neither is designed to be a thorough process. However, there are also obvious and important differences.

QA teams and developers use smoke tests, and QA teams use sanity tests, to determine in a timely manner whether an application is sound and solid. The best time to perform smoke tests is during a daily build. Testing at the component level, rather than waiting until the build is ‘done’, catches deficiencies that could otherwise remain undetected, embedded in the build.

| Smoke Testing | Sanity Testing |
| --- | --- |
| Smoke testing is performed to ascertain that the critical functionalities of the program are working fine | Sanity testing is done to check that new functionality works and that bugs have been fixed |
| The objective of smoke testing is to verify the “stability” of the system in order to proceed with more rigorous testing | The objective of sanity testing is to verify the “rationality” of the system in order to proceed with more rigorous testing |
| Smoke testing is performed by developers as well as testers | Sanity testing is usually performed by testers alone |
| Smoke testing is usually documented or scripted | Sanity testing is usually undocumented and unscripted |
| Smoke testing is a subset of regression testing | Sanity testing is a subset of acceptance testing |
| Smoke testing exercises the entire system from end to end | Sanity testing exercises a particular component of the entire system |
| Smoke testing is a general health check | Sanity testing is a specialized health check |

Automated test management can significantly augment both smoke and sanity tests. Automated test runs are most often triggered by the build process: smoke tests run first against the software build, followed by sanity tests. The thoroughness of both smoke and sanity tests depends on the coverage provided by the test cases, or test suites, designed for each.
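As a sketch of that ordering under assumed conventions, the runner below triggers the smoke suite first and proceeds to the sanity suite only if it passes. It assumes pytest and project-defined “smoke” and “sanity” markers, both of which are illustrative.

```python
# Build-triggered runner sketch: gate sanity tests on smoke tests passing.
# Assumes pytest with project-defined "smoke" and "sanity" markers.
import subprocess
import sys

def run_suite(marker: str) -> bool:
    # Run only the tests carrying the given pytest marker.
    result = subprocess.run([sys.executable, "-m", "pytest", "-m", marker])
    return result.returncode == 0

if __name__ == "__main__":
    if not run_suite("smoke"):
        sys.exit("Build unstable: smoke tests failed; skipping sanity tests.")
    if not run_suite("sanity"):
        sys.exit("Sanity tests failed: recent changes broke expected behavior.")
    print("Smoke and sanity suites passed; build ready for deeper testing.")
```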

Developers and testers rely on smoke and sanity testing to move through application development and deployment with as few delays and technical errors as possible. Smoke testing is especially good at identifying integration issues. Smoke tests discover fundamental problems early, enhancing confidence that upgrades to the application have not obstructed essential functions.

Sanity tests provide summary testing of a software product to ensure that the application logically produces the expected results. By the time a sanity test is performed, the software product has already passed other fundamental and related tests. With a quick evaluation of the logical quality of software functions, sanity tests help determine whether the software is eligible to move forward.

Overall, we can look at smoke testing and sanity testing as similar processes at opposite ends of a test cycle. Smoke testing ensures that the fundamentals of the software are sound so that more in-depth testing can be conducted, while sanity testing looks back to see whether the changes made during additional development and testing broke anything.

Smoke tests, Performance tests, and the Enterprise

Of utmost importance to the enterprise is that software performance targets customer requirements. Both smoke and sanity tests cover the software product in a timely manner to mitigate the risk of poor customer engagement. Test cases can be written to apply to varying real-world business challenges, while automated reporting allows QA teams to quickly assess such attributes as accuracy, capacity, and performance.

By comparing the performance of updated software with the application’s previous performance, both smoke and sanity tests broadly cover the product’s anticipated operations. Coverage must include a surface assessment of the efficiency with which software products interface with systems, servers, and platforms. Comparisons with the most recent release also allow generalized test coverage to quickly spot discrepancies, especially those involving the build or the logic that supports software operations. By combining to expedite deployment, smoke and sanity tests mitigate risk to the enterprise, contributing to increased ROI and reduced time to market.

Sanjay Zalavadia
As the VP of Client Service for Zephyr, Sanjay brings over 15 years of leadership experience in IT and Technical Support Services. Throughout his career, Sanjay has successfully established and grown premier IT and Support Services teams across multiple geographies for both large and small companies. Most recently, he was Associate Vice President at Patni Computers (NYSE: PTI), responsible for the Telecoms IT Managed Services Practice, where he established IT Operations teams supporting Virgin Mobile, ESPN Mobile, Disney Mobile, and Carphone Warehouse. Prior to this, Sanjay was responsible for Global Technical Support at Bay Networks, a leading routing and switching vendor, which was acquired by Nortel. Sanjay has also held management positions in Support Service organizations at start-up Silicon Valley Networks, a vendor of Test Management software, and SynOptics.
