Elfriede Dustin Sharing Her Interest in Automated Testing

Elfriede Dustin of Innovative Defense Technology is the author of several books, including Automated Software Testing, Quality Web Systems, and her latest, Effective Software Testing. Dustin discusses her views on test design, scaling automation, and the current state of test automation tools.

LogiGear: With test design being an important ingredient of successful test automation, we at LogiGear Magazine are curious how you approach the problem. What methods do you employ to ensure that your teams are developing tests that are ready for, or supported by, automation?

Dustin: How we approach test design depends entirely on the testing problem we are solving. For example, we support the testing of mission-critical systems using our (IDT’s) Automated Test and Re-Test (ATRT) solution. For some of these systems, automated Graphical User Interface (GUI) testing is important because the system under test (SUT) has operator-intensive GUI inputs that absolutely need to work. In those cases we base the test design on the GUI requirements, GUI scenarios, expected behavior, and scenario-based expected results.

Other mission-critical systems we test need interface testing, where hundreds or thousands of messages are sent back and forth between interfaces, and the message data has to be simulated (here we have to test load, endurance, and functionality). We might have to look at detailed software requirements for message values, data inputs, and expected results, and/or examine the SUT’s Software Design Documents (SDDs) for such data, so we can ensure the tests we are designing, or even tests that already exist, are sufficient to validate those messages against the requirements. Finally, it is of little use if we have automated the testing of thousands of messages but have only added to the analysis workload; the message data analysis therefore needs to be automated as well, and that in turn requires a different test design. Many more test design approaches exist and need to be applied depending on the testing problems we are trying to solve, such as testing web services, databases, and others.
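The kind of automated message analysis Dustin describes can be sketched roughly as follows. This is an illustrative toy, not ATRT's implementation; the message fields and expected values are invented for the example.

```python
# Hypothetical sketch of automated message analysis: validate a batch of
# interface messages against expected values derived from requirements,
# so a human only has to review the failures, not every message.

def validate_message(msg, expected):
    """Return a list of mismatches between a message and its expected values."""
    errors = []
    for field, want in expected.items():
        got = msg.get(field)
        if got != want:
            errors.append(f"{field}: expected {want!r}, got {got!r}")
    return errors

def analyze(messages, expected_by_id):
    """Summarize pass/fail counts across thousands of messages."""
    failures = {}
    for msg in messages:
        errs = validate_message(msg, expected_by_id[msg["id"]])
        if errs:
            failures[msg["id"]] = errs
    return {"total": len(messages), "failed": len(failures), "details": failures}

# Two simulated messages: one matching the requirement, one deviating.
report = analyze(
    [{"id": 1, "status": "ACK", "code": 0}, {"id": 2, "status": "NAK", "code": 7}],
    {1: {"status": "ACK", "code": 0}, 2: {"status": "ACK", "code": 0}},
)
```

With load or endurance runs producing thousands of such messages, only the `details` section of the report needs human attention.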

ATRT provides a modeling component for any type of test: GUI, message-based, a combination of GUI and message-based, and more. This modeling component provides a more intuitive view of the test flow than mere text in a test plan or test procedure document. The model view lends itself to producing designs that are more likely to ensure the automated test meets the requirements the testers set out to validate, and it reduces rework or redesign once the test is automated.

LogiGear: What are your feelings on the different methods of test design, from keyword test design and Behavior-Driven Development (e.g., the Cucumber tool) to scripted test automation of manual test case narratives?

Dustin: We view keyword-driven or behavior-driven tests as automated testing approaches, not necessarily test design approaches; we generally try to separate the nomenclature of test design approaches from that of automated testing approaches. Our ATRT solution provides a keyword-driven approach: the tester (automator) selects the required keyword action via an ATRT-provided icon, and the automation code is generated behind the scenes. We find the keyword/action-driven approach to be most effective in helping test automators, in our case ATRT users, build effective tests with ease and speed.
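For readers unfamiliar with the pattern, the general keyword-driven idea can be illustrated with a toy runner. This is a generic sketch, not how ATRT works internally; the keywords, the fake UI dictionary, and the step table are all invented for illustration.

```python
# Toy illustration of the keyword-driven pattern: test steps reference named
# actions, and a small runner dispatches each keyword to the code behind it.
# The tester authors the step table; the action code is written once.

ACTIONS = {}

def keyword(name):
    """Register a function as the implementation of a keyword."""
    def register(fn):
        ACTIONS[name] = fn
        return fn
    return register

@keyword("enter_text")
def enter_text(ui, field, value):
    ui[field] = value  # stand-in for typing into a real GUI field

@keyword("verify_text")
def verify_text(ui, field, value):
    assert ui[field] == value, f"{field}: expected {value!r}, got {ui[field]!r}"

def run_test(steps):
    """Execute a table of (keyword, arguments) rows against a fake UI."""
    ui = {}
    for kw, *args in steps:
        ACTIONS[kw](ui, *args)
    return ui

ui = run_test([
    ("enter_text", "username", "tester1"),
    ("verify_text", "username", "tester1"),
])
```

The appeal of the pattern is the separation of concerns: the step table reads like a test design, while the action implementations absorb the scripting effort.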

LogiGear: How have you dealt with the problems of scaling automation? Do you strive for a certain percentage of all tests to be automated? If so, what is that percentage and how often have you attained it?

Dustin: Again, the answer to the question of scaling automation depends: are we talking about scaling automated testing for message-based/background testing, or scaling GUI-based automated testing? Scaling message-based/background automated testing tends to be an easier problem to solve than scaling GUI-based automated testing.

We don’t necessarily strive for a certain percentage of tests to be automated. We use a matrix of criteria to choose the tests that lend themselves to automation versus those that don’t. We also build an automated test strategy before we propose automating any tests. The strategy begins with achieving an understanding of the test regimen and test objectives, then prioritizes what to automate based on the likely return on investment.

Although everything a human tester does can, with enough time and resources, be automated, not all tests should be automated. It has to make sense; there has to be some measurable return on the investment before we recommend applying automation. “Will automating this increase the probability of finding defects?” is a question that’s rarely asked when it comes to automated testing. In our experience, there are ample tests that are ripe for achieving significant returns through automation, so we are hardly hurting our business opportunities when we tell a customer that a certain test is not a good candidate for automation.

Conversely, for those who have tried automation in their test programs and claim their tests are 100% automated, my first question is, “What code coverage percentage does your 100% test automation achieve?” One hundred percent test automation does not mean 100% code coverage or 100% functional coverage. One of the most immediate returns on automation is the expansion of test coverage without increasing test time (and often while actually reducing it), especially when it comes to automated backend/message testing.
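A criteria matrix like the one Dustin mentions can be sketched as a weighted scoring exercise. The criteria, weights, ratings, and threshold below are entirely hypothetical, invented only to show the mechanics of ranking automation candidates by likely return on investment.

```python
# Hypothetical criteria matrix for selecting automation candidates: each test
# is rated 0-5 on weighted criteria, and only tests clearing a threshold are
# recommended for automation. All numbers here are illustrative.

WEIGHTS = {
    "runs_per_release": 3,  # how often the test is repeated
    "manual_minutes": 2,    # manual effort saved per run
    "ui_stability": 2,      # stable interfaces make automation cheaper
    "defect_risk": 3,       # will automation help find defects here?
}

def automation_score(test):
    """Weighted sum of criterion ratings; higher suggests better ROI."""
    return sum(WEIGHTS[c] * test[c] for c in WEIGHTS)

def pick_candidates(tests, threshold=30):
    """Names of tests whose score clears the automation threshold."""
    return sorted(t["name"] for t in tests if automation_score(t) >= threshold)

tests = [
    {"name": "login_smoke", "runs_per_release": 5, "manual_minutes": 4,
     "ui_stability": 5, "defect_risk": 3},
    {"name": "one_off_migration", "runs_per_release": 1, "manual_minutes": 5,
     "ui_stability": 1, "defect_risk": 2},
]
```

Under these made-up weights, a frequently repeated smoke test scores well while a one-off migration check falls below the threshold, mirroring the "not all tests should be automated" point.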

LogiGear: What is your feeling on the current state of test automation tools? Do you feel that they are adequately addressing the needs of testers? Both those who can script and those who cannot?

Dustin: Most of the current crop of test automation tools focus on Windows or web development. Most existing tools must be installed on the SUT, thus modifying the SUT configuration, and most are GUI-technology dependent. In our test environments we have to work with various operating systems in various states of development, using various programming languages and GUI technologies. Since no tool currently on the market met our broad criteria for an automated testing tool, we at IDT developed our own solution, the ATRT tool mentioned earlier. ATRT is GUI-technology neutral, doesn’t need to be installed on the SUT, is OS independent, and uses keyword actions; no software development experience is required. ATRT provides the tester with every GUI action a mouse provides, in addition to various other keywords. Our experience with customers using ATRT for the first time is that it offers a very intuitive user interface and is easy to use. We also find, interestingly, that some of the best test automators with ATRT are younger testers who have grown up playing video games, incidentally developing hand-eye coordination. Some testers even find working with ATRT fun, as it takes them away from the monotony of manually testing the same features over and over again with each new release.

With ATRT we wanted to provide a solution where testers could model a test before the SUT becomes available and/or see a visual representation of their tests. ATRT provides a test automation modeling component.

We needed a solution that could run various tests concurrently (whether GUI tests or message based/backend/interface test) on multiple systems. ATRT has those features.

A tester should be able to focus on test design and new features, not spend time crafting elaborate automated testing scripts. Asking a tester to do such work puts him or her at risk of losing focus on what he or she is trying to accomplish: verifying that the SUT behaves as expected.

LogiGear: In your career, what has been the single greatest challenge to getting test automation implemented effectively?

Dustin: Test automation can’t be treated as a side activity or a nice-to-have (“whenever we have some time, let’s automate a test”). Automated testing needs to be an integral, strategically planned part of the software development lifecycle (it is a mini-lifecycle in itself) and cannot be an afterthought. See our books Automated Software Testing and Implementing Automated Software Testing on the automated testing lifecycle.

LogiGear: Finally, what is the greatest benefit that test automation provides to a software development team and/or the test team?

Dustin: Systems have become increasingly complex; we can’t test them the same way we did 20 years ago. Automated software testing, applied with a return-on-investment strategy built collaboratively with subject matter and testing experts, provides speed, agility, and efficiency to the software development team. When done right, no software or testing team can test anywhere near as much, as quickly, or as consistently and repeatably as it can with automated testing.

 

LogiGear Corporation

LogiGear Corporation provides global solutions for software testing, and offers public and corporate software-testing training programs worldwide through LogiGear University. LogiGear is a leader in the integration of test automation, offshore resources and US project management for fast and cost-effective results. Since 1994, LogiGear has worked with hundreds of companies from the Fortune 500 to early-stage startups, creating unique solutions to exactly meet their needs. With facilities in the US and Vietnam, LogiGear helps companies double their test coverage and improve software quality while reducing testing time and cutting costs.

For more information, contact Joe Hughes + 01 650.572.1400

