Four Fundamental Requirements of Successful Testing in the Cloud – Part I

Internet-based per-use service models are turning things upside down in the software development industry, prompting rapid expansion in the development of some products and measurable reduction in others. (Gartner, August 2008) This global transition toward computing “in the Cloud” introduces a whole new level of challenge when it comes to software testing.

Cloud computing, when done well, provides a single, reliable point of access for users. A consistent, positive user experience sells the service, and rigorous testing assures a quality experience. To produce reliable, effective results for users from all walks of life, exacting software testing standards must be met.

In a series of articles over the next several months, LogiGear will identify four fundamental requirements by which software testing in the cloud can uphold these standards:
Four Fundamental Requirements for Successful Testing in the Cloud:

01: Constantly Test Everything – the Right Way
02: Know What’s Where – and Prove It
03: Define Your Paradigm
04: Don’t Underestimate the Importance of Positive User Experience

In this issue’s article, we address:

Requirement 01: Constantly Test Everything – the Right Way

The Cloud demands that we be as nimble as possible, delivering features and fixes in almost real-time fashion. Both customer and provider rely on software development that can maintain quality while being light on its feet and constantly moving. In addition, Cloud-oriented systems tend to be highly complex and dynamic in structure — more than our industry has ever seen before.

The traditional software development and testing models do not support this constant “diligence in motion”; therefore, a new Cloud delivery model must be adopted. Traditional models worked reasonably well in the world of client/server, since users were most often internal resources. User experience was downplayed, and glitches were tolerated.

The lengthy cycle of requirements generation and feature development, followed by a set of testing cycles, leaves extended periods of time without testing. These gaps do not align with the needs of Cloud consumers. For them, an ongoing, reliable, uninterrupted experience is everything.

An effective delivery model of software for the Cloud pivots on one key moment – the instant of feature release. A provider or customer must be able to fix or change application features on the fly; that is, all tests for a fix or new feature are complete at the moment of its release.

The only way companies can realistically achieve this model is to have superior test sets that are fully automated – and to go about automation the right way. Otherwise the effort quickly becomes unachievable and unmanageable. A minimal sketch of such a release gate follows.
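To make the idea concrete, here is a minimal sketch of a release gate that ships a feature only when its full automated test set passes. The feature names and the pytest-style runner are assumptions for illustration, not part of any LogiGear tool:

# Minimal release-gate sketch: a feature ships only when every
# automated test for it passes at the moment of release.
import subprocess
import sys

def run_full_suite(suite_dir: str) -> bool:
    """Run all automated tests for the feature; True only if all pass."""
    # Assumes a pytest-style runner; exit code 0 means all tests passed.
    result = subprocess.run([sys.executable, "-m", "pytest", suite_dir])
    return result.returncode == 0

def release(feature: str, suite_dir: str) -> None:
    if not run_full_suite(suite_dir):
        raise RuntimeError(f"Release of {feature} blocked: tests failing")
    print(f"{feature} released with all tests green")

if __name__ == "__main__":
    release("new-billing-widget", "tests/billing")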

“In the past 5 years, evaluating millions of tests for our clients, LogiGear has achieved automation percentages of over 95% of all tests.”

When automation efforts fail to achieve a high percentage of automated tests, the method itself is usually at fault. When test automation follows specific, proven guidelines, its success can be repeated again and again.

Guidelines for Successful Cloud Test Automation

When an automation team spends a disproportionate amount of time automating tests, and the resulting automation ends up covering only about 30% of the tests, the automation policy has failed. A much higher percentage is needed to “test everything always” in Cloud applications. Additionally, automation should not dominate the test process: the actual automation development and, more importantly, the maintenance effort should have only a modest footprint in time and resources.

While many testing organizations mistakenly approach automation from the perspective of tooling or programming, an approach centered on effective test design for automation, combined with an agile test development process, yields far better results. When done right, the result is a set of automated tests with on-the-fly adaptability that readily meets the fast-changing requirements of testing in the Cloud.

Tools have their place in the process, but frequently steal the center of attention, viewed as panaceas. Primary focus goes to buying and learning “the tool” rather than expending the time, effort and cost involved in revisiting test design. But if a framework and test design process are not established, using a tool is like shooting in the dark — the Ready > Fire > Aim! approach adds up casualties quickly. The plan of attack must be mapped out before a proper weapon can be selected, otherwise “fire” can — and as we have seen, usually will — turn into “backfire”.

Establishing a test design process yields a larger pool of ready-to-run tests, giving development cycles more flexibility. The approach aims to have at least 95 percent of tests automated, and 95 percent of testers’ time spent on test development, not automation.

These tests are not limited to regression or bug validation; they are calibrated to hunt for bugs, probing boundary conditions and state transitions and incorporating exploratory and negative testing.

The test design approach has six essential principles:

1. No more than 5% of all tests should be executed manually

The cost of introducing automation is usually significant. Maximizing the investment by automating a high percentage of test cases makes a rewarding payoff far more likely, as the rough calculation below illustrates.
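As a back-of-the-envelope illustration (the numbers are hypothetical, not LogiGear figures), automation pays off once a test has run often enough to amortize its automation cost:

# Hypothetical break-even sketch for automating a single test.
def break_even_runs(manual_cost_per_run: float, automation_cost: float,
                    automated_cost_per_run: float) -> float:
    """Number of runs after which automating is cheaper than manual."""
    saving_per_run = manual_cost_per_run - automated_cost_per_run
    return automation_cost / saving_per_run

# Example: 1 hour manual, 10 hours to automate, 0.05 hours per automated run.
runs = break_even_runs(manual_cost_per_run=1.0,
                       automation_cost=10.0,
                       automated_cost_per_run=0.05)
print(f"Break-even after about {runs:.1f} runs")  # roughly 10.5 runs

In a Cloud setting, where the full test set runs at every release, a test is executed far more than ten times, so automating nearly everything recovers the investment quickly.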

2. No more than 5% of all efforts around testing should involve automating the tests

Creating more and better test cases is key to proper test design. When testers spend significant portions of their time programming automation, test cases tend to be shallow, addressing only the basic functionality of the system.

Allocating time for in-depth test development allows testers to write more elaborate cases, using testing techniques such as decision tables or soap opera testing, as well as their imagination (a frequently underestimated asset). The result is better coverage with less effort at the tool end. A decision table, for example, translates directly into data-driven test cases, as the sketch below shows.
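Here is a minimal sketch of that translation; the login rules and names are hypothetical, chosen only to show the technique:

# A small decision table for a hypothetical login feature, expressed
# as data: each row is one combination of conditions plus the
# expected outcome.
import pytest

DECISION_TABLE = [
    # (valid_user, valid_password, account_locked, expected_outcome)
    (True,  True,  False, "logged_in"),
    (True,  True,  True,  "locked_out"),
    (True,  False, False, "bad_credentials"),
    (False, True,  False, "bad_credentials"),
    (False, False, False, "bad_credentials"),
]

def login(valid_user, valid_password, account_locked):
    """Stand-in for the system under test."""
    if not (valid_user and valid_password):
        return "bad_credentials"
    return "locked_out" if account_locked else "logged_in"

@pytest.mark.parametrize(
    "valid_user,valid_password,account_locked,expected", DECISION_TABLE)
def test_login_decision_table(valid_user, valid_password,
                              account_locked, expected):
    assert login(valid_user, valid_password, account_locked) == expected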

3. Test development and automation must be fully separated

To make sure that test cases are sufficiently in-depth, a clear distinction must be made between the responsibilities of testers and programmers. For successful Cloud test automation, testers must be dedicated only to testing. A keyword-driven split, sketched below, is one common way to enforce this separation.
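In this sketch (an illustration, not necessarily LogiGear's own implementation), testers author test cases as plain action lines, while automation engineers implement each action once:

# Testers author test cases as data; automation engineers maintain
# the ACTIONS mapping. Neither touches the other's work. All names
# here are illustrative.

# Authored by testers - no programming required:
TEST_CASE = [
    ("open_app",    []),
    ("log_in",      ["alice", "secret"]),
    ("check_title", ["Dashboard"]),
]

# Maintained by automation engineers:
def open_app():
    print("launching application")

def log_in(user, password):
    print(f"logging in as {user}")

def check_title(expected):
    print(f"verifying title == {expected!r}")

ACTIONS = {"open_app": open_app, "log_in": log_in,
           "check_title": check_title}

def run(test_case):
    for action, args in test_case:
        ACTIONS[action](*args)

run(TEST_CASE)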

4. Test cases must have a clear and differentiated scope

Each test case should be well-defined in its scope and purpose, and together test cases should map out comprehensive coverage while avoiding overlap and omissions.

5. Tests must be written at the right level of abstraction

Tools for conducting tests must be flexible enough to handle both higher business levels and lower user interface (UI) levels on demand. The sketch below shows how a business-level action can be composed from UI-level ones.
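A minimal sketch with hypothetical action names: a business-level test reads at the business level, while UI-level tests can still reach the lower actions directly.

# Two levels of abstraction: low-level UI actions, and a
# business-level action composed from them.
def click(element: str):
    print(f"click {element}")

def type_text(field: str, text: str):
    print(f"type {text!r} into {field}")

# Business-level action built on the UI actions:
def place_order(product: str, quantity: int):
    type_text("search box", product)
    click("search button")
    type_text("quantity field", str(quantity))
    click("add to cart")
    click("checkout")

# A business-focused test never mentions the UI details:
place_order("blue widget", 3)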

6. Test methods must be simple

The method used to achieve effective test design, and the subsequent high coverage automation, should be easy and straightforward. Most of all, it should not add to the complexity of automation.

These principles of test design ensure the adaptability required for successful automated testing in the Cloud. Software development companies that automate most of their testing, focus most of their effort on test development, and staff dedicated testers trained to create comprehensive, flexible test designs that produce useful, accessible results will be well equipped to travel at the speed of service.

LogiGear Corporation

LogiGear Corporation provides global solutions for software testing, and offers public and corporate software-testing training programs worldwide through LogiGear University. LogiGear is a leader in the integration of test automation, offshore resources and US project management for fast and cost-effective results. Since 1994, LogiGear has worked with hundreds of companies from the Fortune 500 to early-stage startups, creating unique solutions to exactly meet their needs. With facilities in the US and Vietnam, LogiGear helps companies double their test coverage and improve software quality while reducing testing time and cutting costs.

For more information, contact Joe Hughes + 01 650.572.1400

