Test Design Focused on Expediting Functional Test Automation

Test organizations continue to undergo rapid transformation as demands grow for testing efficiencies. Functional test automation is often seen as a way to increase the overall efficiency of functional and system tests. How can a test organization stage itself for functional test automation before an investment in test automation has even been made? Further, how can you continue to harvest the returns from your test design paradigm once the test automation investment has been made? In this article we will discuss the factors in selecting a test design paradigm that expedites functional test automation. We will recommend a test design paradigm and illustrate how this could be applied to both commercial and open-source automation solutions. Finally, we will discuss how to leverage the appropriate test design paradigm once automation has been implemented in both an agile (adaptive) and waterfall (predictive) system development lifecycle (SDLC).

Test design – selection criteria

The test design selection criteria should be grounded in the fundamental goals of any functional automation initiative. Let us assume the selected test automation tool shall enable end-users to author, maintain and execute automated test cases in a web-enabled, shareable environment. Furthermore, the test automation tool shall support test case design, automation and execution “best practices” as defined by the test organization. To harvest the maximum return from both test design and test automation, the test design paradigm must support:

  • Manual test case design, execution and reporting
  • Automated test case design, execution and reporting
  • Data-driven manual and automated test cases
  • Reuse of test case “steps” or “components”
  • Efficient maintenance of manual and automated test cases

Test design – recommended paradigm

One paradigm that has been gaining momentum under several guises in the last few years is keyword-based test design. I have stated in previous articles that “The keyword concept is founded on the premise that the discrete functional business events that make up any application can be described using a short text description (keyword) and associated parameter value pairs (arguments). By designing keywords to describe discrete functional business events the testers begin to build up a common library of keywords that can be used to create keyword test cases. This is really a process of creating a language (keywords) to describe a sequence of events within the application (test case).”
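
To make the idea concrete, here is a minimal sketch in Python of a discrete business event captured as a keyword with its parameter/value pairs. All keyword names, argument names and values are illustrative, not taken from any specific tool:

```python
# A keyword pairs a short business-event description with its
# parameter/value pairs (arguments). Names and values are illustrative.
logon_step = {
    "keyword": "Logon User",
    "arguments": {"user_id": "jsmith", "password": "s3cret"},
}

enter_name_step = {
    "keyword": "Enter Customer Name",
    "arguments": {"first_name": "Jane", "last_name": "Smith"},
}

# A keyword test case is an ordered sequence of such steps: a small
# "language" describing a sequence of events within the application.
create_customer_test = [logon_step, enter_name_step]
```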

The keyword concept is not a silver bullet, but it does present a design medium that leads to both effective test case design and ease of automation. Keywords present the opportunity to design test cases in a fashion that supports the selection criteria above. They do not guarantee that test cases will be effective, but they certainly present the greatest opportunity for success. Leveraging a test design paradigm that is modular and reusable paves the way for long-term automation; moreover, it moves most of the maintenance to a higher level of abstraction: the keyword. The keyword name should be a shorthand description of the action the keyword performs. It should begin with the action being performed, followed by the functional entity, followed by descriptive text (if required). Here are several common examples:

  • Logon User
  • Enter Customer Name
  • Enter Customer Address
  • Validate Customer Name
  • Select Customer Record
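
In an automation framework, each such keyword typically maps to one reusable, modular function whose name follows the same action-plus-entity convention. A minimal sketch, assuming a hypothetical UI driver (stubbed out here so the example runs on its own):

```python
class StubDriver:
    """Stand-in for a real UI driver (e.g., a Selenium wrapper).
    It records field values so the sketch runs without an application."""

    def __init__(self):
        self.fields = {}

    def type(self, field, value):
        self.fields[field] = value

    def click(self, control):
        print(f"clicked {control}")

    def read(self, field):
        return self.fields.get(field, "")

web_app = StubDriver()

def logon_user(user_id, password):
    """Logon User: authenticate a user against the application."""
    web_app.type("user_id", user_id)
    web_app.type("password", password)
    web_app.click("logon")

def enter_customer_name(first_name, last_name):
    """Enter Customer Name: populate the customer name fields."""
    web_app.type("first_name", first_name)
    web_app.type("last_name", last_name)

def validate_customer_name(expected_first, expected_last):
    """Validate Customer Name: compare displayed values to expectations."""
    assert web_app.read("first_name") == expected_first
    assert web_app.read("last_name") == expected_last
```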

Test design – keyword application

Keyword test case design begins as an itemized list of the test cases to be constructed–usually as a set of named test cases. The internal structure of each test case is then constructed using existing (or new) keywords. Once the design is complete, the appropriate test data (input and results) can be added. Testing the keyword test case design involves executing the test case against the application or applications being tested.
At first glance this does not appear to be any different from any other method of test case design, but there are significant differences between keyword test case design and any freehand/textual approach. Keyword test case designs are:

  • Consistent – the same keyword is used to describe the business event every time
  • Data-driven – the keyword contains the data required to perform the test step
  • Self-documenting – the keyword description contains the designers’ intent
  • Maintainable – with consistency comes maintainability
  • Automatable – supports automation with little or no design transformation (rewrite)
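
A short sketch of how these properties combine, reusing the keyword functions from the earlier sketch: the same keyword names recur (consistent), each step carries its own data (data-driven and self-documenting), and the identical steps a manual tester can follow are executed by a simple driver loop (automatable). The test case, library and runner names are illustrative:

```python
# One keyword test case as data: each step is a keyword plus arguments.
create_customer_test = [
    {"keyword": "Logon User",
     "arguments": {"user_id": "jsmith", "password": "s3cret"}},
    {"keyword": "Enter Customer Name",
     "arguments": {"first_name": "Jane", "last_name": "Smith"}},
    {"keyword": "Validate Customer Name",
     "arguments": {"expected_first": "Jane", "expected_last": "Smith"}},
]

# The keyword library binds each keyword name to its implementation,
# so maintenance happens at the keyword level, not in every test case.
keyword_library = {
    "Logon User": logon_user,
    "Enter Customer Name": enter_customer_name,
    "Validate Customer Name": validate_customer_name,
}

def run_test(test_case):
    """Execute a keyword test case by dispatching each step in order."""
    for step in test_case:
        keyword_library[step["keyword"]](**step["arguments"])

run_test(create_customer_test)
```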

Test design – adaptation based on development/testing paradigm

There are two primary development and testing approaches in use by development organizations today: adaptive (agile) and predictive (waterfall/cascade). Both approaches certainly have their proponents–though adaptive (agile) system development lifecycles are increasingly gaining precedence. The question becomes: how does this affect the test design paradigm? The answer appears to be that it does not affect the paradigm itself, but it does affect the timing.

Predictive (waterfall/cascade) development lifecycles can be supported by a straightforward design, build, execute and maintain test design paradigm that may later support automation. Eventually, one would expect the predictive testing team to design, build, execute, maintain and automate their test case inventory. This can be accomplished using both Tier 1 commercial automation tools and open-source automation tools. As long as the automation tool supports modular design (functions) and data-driven testing (test data sources), keyword-based automation can be supported–the most significant difference being the time and effort required to implement the testing framework.

Adaptive (agile) development lifecycles come in several flavors–some support immediate keyword-based functional test design and automation while others do not. Agile test-driven development (TDD) using FitNesse™, a testing framework that requires instrumentation by and collaboration with the development team, certainly supports keyword-based test case design and automation. Other agile paradigms support instrumentation only at the unit test level, or not at all; in those cases a separate keyword-based test case design and automation toolset must be used. The challenge for non-TDD agile becomes designing, building, executing and maintaining functional tests within the context of a two- to four-week sprint. The solution is a combination of technique and timing. For the immediate changes in the current sprint, consider using exploratory testers and an itemized list of test cases with little (if any) content–basically a high-level checklist. Once the software for a sprint has migrated to production and existed there for at least one sprint, a traditional set of regression test cases can be constructed using keywords. This separates the challenge into sprint-related testing and regression testing.
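
To ground the modular-design and data-driven requirement, here is a minimal sketch, building on the earlier keyword library and runner, of keyword test cases held in an external test data source. The CSV layout is hypothetical; commercial and open-source tools each supply their own equivalent (spreadsheet, database, wiki table):

```python
import csv
import io

# Hypothetical external test data source: one keyword step per row,
# with arguments serialized as name=value pairs.
TEST_DATA = """keyword,arguments
Logon User,user_id=jsmith;password=s3cret
Enter Customer Name,first_name=Jane;last_name=Smith
Validate Customer Name,expected_first=Jane;expected_last=Smith
"""

def load_test_case(source_text):
    """Parse keyword steps from a CSV test data source."""
    steps = []
    for row in csv.DictReader(io.StringIO(source_text)):
        arguments = dict(pair.split("=", 1)
                         for pair in row["arguments"].split(";"))
        steps.append({"keyword": row["keyword"], "arguments": arguments})
    return steps

# Reuses run_test and the keyword library from the earlier sketches:
# the test data changes, the framework and keywords do not.
run_test(load_test_case(TEST_DATA))
```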

David W. Johnson

David W. Johnson, “DJ,” is a Senior Test Architect with over 25 years of experience in Information Technology across several business verticals, and has played key roles in business analysis, software design, software development, testing, disaster recovery and post-implementation support. Over the past 20 years, he has developed specific expertise in testing and in leading QA/Test team transformations, delivering test architectures, strategies, plans, management, functional automation, performance automation, mentoring programs, and organizational assessments.


