Continuous Iteration in Automation

I’ve been teaching a lot lately; I was in India for a week, and in two weeks I’m off to Seattle to teach on performance topics. I thoroughly enjoy teaching: it keeps me sharp on current trends and provides a nice break from the “implementation focus” that I generally have day to day.

One topic that has been gnawing at me lately, however, is the notion of “continuous iteration” in automation. With virtualization, my feeling is that you should be running the automation you create all the time. I’ve coined a new term for it lately: “Test Everything, All the Time.”

I find that many automation teams shelve their automation until runs are needed, and this has become a real bone of contention when it comes to automation credibility. The automation should always be in a state of readiness, meaning I can run all of my tests at any time if I want to. We used to be plagued by machine constraints and “unstable” platforms, but those problems are becoming increasingly rare; we just seem to be holding on to old automation ideas.

I’ve set up our labs here with machines that constantly run our clients’ automation, while the testers keep adding more tests to that bank. Automation should free us up to work this way, yet I still see too many people sitting and watching automated test runs, which to me is like watching paint dry.
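To make the “always running” idea concrete, here is a minimal sketch of a dispatcher that keeps a bank of lab machines busy; it is an illustration under my own assumptions, not any particular framework. Each machine repeatedly pulls the next suite from a shared queue, runs it, and puts it back, so the whole test bank is always executing somewhere. The machine names, suite names, and the echo placeholder standing in for a real runner command are all hypothetical.

```python
import queue
import subprocess
import threading

# Hypothetical bank of lab machines and the current bank of test suites.
MACHINES = ["lab-01", "lab-02", "lab-03"]                    # assumed host names
SUITES = ["login_suite", "checkout_suite", "reports_suite"]  # assumed suite names

work: queue.Queue = queue.Queue()
for suite in SUITES:
    work.put(suite)

def run_forever(machine: str) -> None:
    """Keep pulling suites off the shared queue and running them on one machine."""
    while True:
        suite = work.get()
        # Placeholder dispatch; substitute a real runner or lab agent call here.
        subprocess.run(["echo", f"running {suite} on {machine}"], check=False)
        work.put(suite)  # re-queue the suite so it runs again on the next free machine

for machine in MACHINES:
    threading.Thread(target=run_forever, args=(machine,)).start()
```

Adding a new test to the bank is then just another item on the queue; nothing has to be pulled off a shelf before a run.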

We have a “readiness” dashboard covering all of our tests at all times, so if I choose to run a set of tests immediately, we can dispatch them to a bank of machines and run them. It makes my job as a quality director easy: I never have to ask whether the “automation” is ready, because I have that information at my fingertips for all types of tests. I’ve attached a simple dashboard report of the kind I’ve used since I started working on automation projects in the early ’90s.

Remember, your automation needs to be credible and usable, not one or the other.

7-Sep-09

Test Cases Created and in Development             25
Test Case Creation Goal for Week                  40
Goal Delta                                       -15
Test Cases to be Certified                       150
Test Cases Modified* (timing only)                 5

TOTAL TEST CASES IN PRODUCTION                  1253
    Passive (Functional) Test Cases              631
    Negative Test Cases                          320
    Boundary Test Cases                           40
    Work Flow Test Cases                          78
    User Story Based Test Cases                   24
    Miscellaneous Test Cases                     160

Test Cases Ready for Automation                 1065
Test Cases Ready for Automation %                85%
Test Cases NOT Ready for Automation %            15%
Test Cases NOT Ready for Automation              188
    Test Case Error/Wrong                          5
    Application Error                             23
    Missing Application Functionality             59
    Interface Changes (GUI)                        2
    Timing Problems                               70
    Technical Limitations                          6
    Requests to Change Test (change control)      23
    Sum Check                                    188

TOTAL MACHINE TIME REQUIRED TO RUN          23:45:00
TOTAL MACHINES AVAILABLE TO RUN TESTS              7
TOTAL SINGLE MACHINE RUN TIME                3:23:34
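For anyone reconstructing the arithmetic behind the last three rows: the readiness percentages are simply the ready and not-ready counts over the production total, and the single-machine run time is the total machine time divided evenly across the available machines. A quick check using the figures above (the variable names are mine, not part of the report):

```python
from datetime import timedelta

total_in_production = 1253
ready_for_automation = 1065
not_ready = total_in_production - ready_for_automation               # 188

ready_pct = round(100 * ready_for_automation / total_in_production)  # 85
not_ready_pct = 100 - ready_pct                                      # 15

total_machine_time = timedelta(hours=23, minutes=45)  # 23:45:00
machines_available = 7

# Wall-clock time when the load is split evenly across all available machines.
per_machine_time = total_machine_time / machines_available
print(per_machine_time)  # 3:23:34.285714 -> the 3:23:34 shown above
```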

 


