Test Design Focused on Expediting Functional Test Automation

Test organizations continue to undergo rapid transformation as demands grow for testing efficiencies. Functional test automation is often seen as a way to increase the overall efficiency of functional and system tests. How can a test organization stage itself for functional test automation before an investment in test automation has even been made? Further, how can it continue to harvest the returns from its test design paradigm once the test automation investment has been made? In this article we will discuss the factors in selecting a test design paradigm that expedites functional test automation. We will recommend a test design paradigm and illustrate how it could be applied to both commercial and open-source automation solutions. Finally, we will discuss how to leverage the appropriate test design paradigm once automation has been implemented, in both an agile (adaptive) and a waterfall (predictive) system development lifecycle (SDLC).

Test design – selection criteria

The test design selection criteria should be grounded in the fundamental goals of any functional automation initiative. Let us assume the selected test automation tool shall enable end-users to author, maintain and execute automated test cases in a web-enabled, shareable environment. Furthermore, the test automation tool shall support test case design, automation and execution “best practices” as defined by the test organization. To harvest the maximum return from both test design and test automation, the test design paradigm must support:

  • Manual test case design, execution and reporting
  • Automated test case design, execution and reporting
  • Data-driven manual and automated test cases
  • Reuse of test case “steps” or “components”
  • Efficient maintenance of manual and automated test cases

Test design – recommended paradigm

One paradigm that has been gaining momentum under several guises in the last few years is keyword-based test design. I have stated in previous articles that “The keyword concept is founded on the premise that the discrete functional business events that make up any application can be described using a short text description (keyword) and associated parameter value pairs (arguments). By designing keywords to describe discrete functional business events the testers begin to build up a common library of keywords that can be used to create keyword test cases. This is really a process of creating a language (keywords) to describe a sequence of events within the application (test case).”
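
To make the quoted definition concrete, a single keyword step can be thought of as a name plus its parameter/value pairs. Here is a minimal sketch in Python; the keyword and argument names are illustrative, not taken from any particular tool:

    # One keyword step: a short text description (keyword)
    # plus its parameter/value pairs (arguments).
    step = {
        "keyword": "Enter Customer Name",
        "arguments": {"first_name": "Jane", "last_name": "Doe"},
    }

A sequence of such steps reads like a sentence in the test language: the keyword names the business event, and the arguments carry the data.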

The keyword concept is not a silver bullet, but it does present a design medium that leads to both effective test case design and ease of automation. Keywords present the opportunity to design test cases in a fashion that supports the selection criteria above. They do not guarantee that the test cases will be effective, but they certainly present the greatest opportunity for success. Leveraging a test design paradigm that is modular and reusable paves the way for long-term automation; it also moves most of the maintenance to a higher level of abstraction: the keyword. The keyword name should be a shorthand description of what actions the keyword performs: it should begin with the action being performed, followed by the functional entity, followed by descriptive text (if required). Here are several common examples (a code sketch follows the list):

  • Logon User
  • Enter Customer Name
  • Enter Customer Address
  • Validate Customer Name
  • Select Customer Record
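
In an automation framework, each keyword typically maps to one reusable function whose name follows the same convention. A minimal sketch, assuming a hypothetical `app` driver object with `logon`, `set_field` and `get_field` operations (none of these names come from a specific tool):

    # Each keyword maps to one reusable function: action + functional entity.
    # `app` stands in for whatever UI/API driver the framework provides (hypothetical).
    def logon_user(app, user_id, password):
        app.logon(user_id, password)

    def enter_customer_name(app, first_name, last_name):
        app.set_field("First Name", first_name)
        app.set_field("Last Name", last_name)

    def validate_customer_name(app, first_name, last_name):
        assert app.get_field("First Name") == first_name
        assert app.get_field("Last Name") == last_name

Because each business event lives in exactly one function, a change to the application is absorbed by editing one keyword implementation rather than every test case that uses it.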

Test design – keyword application

Keyword test case design begins as an itemized list of the test cases to be constructed, usually as a set of named test cases. The internal structure of each test case is then built up from existing (or new) keywords. Once the design is complete, the appropriate test data (inputs and expected results) can be added; a sketch of the resulting structure follows. Testing the keyword test case design involves executing the test case against the application or applications under test.
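
As a sketch of that structure, a keyword test case is simply an ordered list of keyword rows with their test data attached (the names continue the illustrative examples above):

    # A keyword test case: an ordered list of (keyword, arguments) rows.
    # Input data and expected results live in the arguments, not in code.
    create_customer_test = [
        ("Logon User",             {"user_id": "tester01", "password": "secret"}),
        ("Enter Customer Name",    {"first_name": "Jane", "last_name": "Doe"}),
        ("Enter Customer Address", {"street": "1 Main St", "city": "Springfield"}),
        ("Validate Customer Name", {"first_name": "Jane", "last_name": "Doe"}),
    ]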
At first glance this does not appear to be any different from any other method of test case design, but there are significant differences between keyword test case design and any freehand, textual approach. Keyword test case designs are:

  • Consistent – the same keyword is used to describe a given business event every time
  • Data-Driven – the keyword carries the data required to perform the test step
  • Self-Documenting – the keyword description captures the designer's intent
  • Maintainable – with consistency comes maintainability
  • Automatable – supports automation with little or no design transformation (rewrite), as the runner sketch below illustrates
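
The last point is what makes keyword designs automation-ready: once each keyword has an implementation, the design rows themselves become executable. Here is a self-contained sketch of a minimal runner, with stubbed keyword implementations so it can run as-is:

    # Minimal keyword runner: dispatches each design row to its implementation.
    # The implementations below are stubs that keep the sketch self-contained.
    def logon_user(user_id, password):
        print(f"logging on as {user_id}")

    def enter_customer_name(first_name, last_name):
        print(f"entering customer name: {first_name} {last_name}")

    KEYWORDS = {
        "Logon User": logon_user,
        "Enter Customer Name": enter_customer_name,
    }

    def run(test_case):
        for keyword, arguments in test_case:
            KEYWORDS[keyword](**arguments)  # the design row is the automation

    run([
        ("Logon User", {"user_id": "tester01", "password": "secret"}),
        ("Enter Customer Name", {"first_name": "Jane", "last_name": "Doe"}),
    ])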

Test design – adaptation based on development/testing paradigm

There are two primary development and testing approaches in use by development organizations today: adaptive (agile) and predictive (waterfall/cascade). Both approaches certainly have their proponents, though adaptive (agile) system development lifecycles are increasingly taking precedence. The question becomes: how does this affect the test design paradigm? The answer appears to be that it does not affect the paradigm itself, but it does affect the timing.

Predictive (waterfall/cascade) development lifecycles can be supported by a straightforward design, build, execute and maintain test design paradigm that may later support automation. Eventually, one would expect the predictive testing team to design, build, execute, maintain and automate their test case inventory. This can be accomplished with both tier-1 commercial automation tools and open-source automation tools: as long as the automation tool supports modular design (functions) and data-driven testing (test data sources), keyword-based automation can be supported. The most significant difference is the time and effort required to implement the testing framework.

Adaptive (agile) development lifecycles come in several flavors; some support immediate keyword-based functional test design and automation, while others do not. Agile test-driven development (TDD) using FitNesse™, a testing framework that requires instrumentation by and collaboration with the development team, certainly supports keyword-based test case design and automation. Other agile paradigms support instrumentation only at the unit-test level, or not at all; in those cases a separate keyword-based test case design and automation toolset must be used.

The challenge for non-TDD agile becomes designing, building, executing and maintaining functional tests within the context of a two-to-four-week sprint. The solution is a combination of technique and timing. For the immediate changes in the current sprint, consider using exploratory testers and an itemized list of test cases with little (if any) content: essentially a high-level checklist. Once the software for a sprint has migrated to and existed in production for at least one sprint, a traditional set of regression test cases can be constructed using keywords. This separates the challenge into sprint-related testing and regression testing.
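
To illustrate the "test data sources" requirement mentioned above, here is a sketch of driving one keyword from an external data file; customers.csv and its column names are hypothetical:

    import csv

    # Data-driven execution: one keyword, many rows from a test data source.
    def enter_customer_name(first_name, last_name):
        print(f"entering customer name: {first_name} {last_name}")

    # customers.csv (hypothetical) has columns: first_name,last_name
    with open("customers.csv", newline="") as source:
        for row in csv.DictReader(source):
            enter_customer_name(row["first_name"], row["last_name"])

Any tool, commercial or open source, that offers the equivalent of these two facilities (reusable functions plus an external data source) can host keyword-based automation; the tools differ mainly in how much framework you must build yourself.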

David W. Johnson

David W. Johnson, “DJ,” is a Senior Test Architect with over 25 years of experience in information technology across several business verticals. He has played key roles in business analysis, software design, software development, testing, disaster recovery and post-implementation support. Over the past 20 years he has developed specific expertise in testing and in leading QA/Test team transformations, delivering test architectures, strategies, plans, test management, functional automation, performance automation, mentoring programs and organizational assessments.

