How to Stop “Flying Blind” When Accounting for QA

One of the most common challenges faced by business leaders is a lack of visibility into QA activities. QA leaders have a tough time communicating the impact, value, and ROI of testing to executives in a way they can understand. Traditional reporting practices often fail to paint the full picture and do not demonstrate the value that the QA Team provides to the business. Business leaders cannot connect QA metrics with business risks and objectives, and the information in executive reports does not provide enough detail to guide decision-making. This may lead them to feel they are “Flying Blind” when accounting for QA. They are no longer satisfied with a few charts and tables; they look for insights that drive performance and effective decision-making. In this article, we will discuss what business leaders really need from you to make informed decisions and carry out effective risk management.

Differences in Perspective: Engineer vs. QA Leader

Test Engineers measure the success of the particular testing programs or initiatives they are a part of rather than overall organizational objectives. Test Engineers and QA leaders develop and analyze metrics from two different perspectives. The basic difference between the two is this: QA leaders are interested in metrics that reflect the overall project or organization and can be used to drive strategy, while Test Engineers are interested in developing and presenting metrics that focus mostly on their own testing activities. Though the two views are different, they are equally important.

QA leaders act as a bridge between Testers and the business. They receive input from different teams and people in different roles, which uniquely positions them to look at the bigger picture and reflect on overall quality within the organization. When QA leaders change their viewpoint and look at things from a Tester’s perspective, it helps them justify decisions and budgets. The microscopic metrics from Testers enable QA leaders to gauge in depth how every team is performing and to plan for required resources and improvements.

Test Engineers generate reports from multiple tools. It becomes difficult for QA leaders to perform analysis based on reports that are spread across test case management tools, ALM tools, Automation tools, and dashboards, and they can miss important information while hopping from one report to another. QA leaders can eliminate the reports that do not contribute to decision-making by creating dashboards that cover the metrics that really matter.

Grab the Attention of Your Business Leaders and Make Them Read Your Reports

It’s important that QA leaders confer with business leaders before choosing which metrics to track and report. Business leaders are busy all the time; their attention spans shrink the higher they sit on the corporate ladder. Reports make a difference only when your leaders read and understand them. Furthermore, if the metrics being tracked do not speak to the items that business leaders deem important, they will fail to make an impact. Additionally, creating test reports is time-intensive and tedious, and may sometimes delay the actual testing as well. QA leaders may have to collect data from multiple sources and stitch it together to make sense of it. Here is how to get your leaders to read your reports, efficiently and effectively:

  • Write effective executive summaries

Business leaders are always busy, so it is important to tailor your summary for your audience. Business leaders read the summary to determine whether it’s worth their time to read the full report. The executive summary is the first thing, and maybe the only thing, that your executives will read to get the essence of your report without drilling down into finer details. So take the time to write an executive summary that is short and precise and that summarizes your report in the business context.

  • Keep your reports Lean and to the point

An executive QA report should contain only the key information that the business leaders want to know. QA leaders can sometimes get carried away providing sophisticated details about the QA activities, which can overload the report and make it difficult for the leaders to grasp all the details. Focus on agreed-upon KPIs and keep your reports simple.

Be clear on: 

  1. Why are you writing this QA report? 
  2. What information do your business leaders need to know? 
  3. How will this information benefit or contribute to organizational goals?

  • Tell a good story instead of just a few charts and data points

Metrics are irrelevant if they are taken out of context. The objective of creating a QA report is to shed light on a challenge, educate your stakeholders on something, or influence their decisions. But often these QA reports have little to no narrative, which can make executives tune out pretty quickly despite some great insights! Have a big picture of what you want to convey with your report. Then, be creative in grabbing the attention of your leaders by presenting effective visualizations. Ensure that you deliver the reports to the right people, at the right time, with the right information.

  • Be honest and avoid bias while reporting

Do not shy away from reporting the KPIs and metrics that are trending down or showing negative results. When drafting a report, QA leaders may worry that negative trends will hurt their credibility in the organization, but depicting an overly optimistic story about your testing is not going to help you. Business leaders trust people who are honest and brave enough to present the negatives, along with what they have learned from them in order to improve in the future.

Though most of us consider ourselves rational beings, the truth is we are all biased. We often fall into the bias trap unconsciously, which makes us lose objectivity. We cannot avoid biases, but we can definitely manage them. Try to be as objective as possible while drafting your reports, and facilitate an objective point of view by having your reports reviewed by peers with different perspectives.

What’s Needed for Decision-Makers to Make QA Decisions? Which Metrics Actually Matter?

Decision-makers need to know how your testing and Automation efforts are impacting the organization. QA leaders should present how testing contributes to business goals. A QA report with lots of statistics but no actionable insights is not that valuable. Many of the quality metrics that QA leaders report are useful only in a testing context and do not translate into business benefits. Here are a few metrics that can measure the value added by testing and Test Automation:

1. Metrics Around Customer Bugs 

One of the objectives of testing efforts is to ensure the release of high-quality software that meets the expectations of the end-users. Tracking the number of issues reported by customers allows you to measure the effectiveness of your testing and development processes. 

  • Defect Leakage

Defect leakage is a metric that measures the percentage of defects that leaked from the testing stages into production. It is one of the most useful metrics for measuring the effectiveness of your testing.

Defect Leakage (%) = (Number of defects found by customers / Number of valid bugs found during testing) × 100

  • Defect Removal Efficiency (DRE)

Defect Removal Efficiency (DRE) relates the number of bugs found by Testers during the testing stages to the total number of defects, including those detected by customers in production. DRE reflects the Testing Team’s ability to find and remove bugs before release.

DRE (%) = [Number of defects found and eliminated during testing / (Number of defects found during testing + Number of defects found post-release)] × 100

Define a service-level agreement (SLA) for defect leakage and defect removal efficiency. For example:

Defect Leakage (DL) should be less than 5%, or Defect Removal Efficiency (DRE) should be 95% or greater.
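
As a rough sketch, the two formulas above can be computed directly from defect counts and checked against the agreed SLA. The counts below are hypothetical placeholders, not figures from any particular project:

```python
# Hypothetical defect counts for one release (placeholder values).
defects_found_in_testing = 195    # valid defects found before release
defects_found_by_customers = 5    # defects that leaked to production

# Defect Leakage, per the formula above.
defect_leakage_pct = defects_found_by_customers / defects_found_in_testing * 100

# Defect Removal Efficiency, per the formula above.
dre_pct = defects_found_in_testing / (defects_found_in_testing + defects_found_by_customers) * 100

# Check the example SLA from the text: DL < 5% or DRE >= 95%.
print(f"Defect Leakage: {defect_leakage_pct:.1f}% (SLA met: {defect_leakage_pct < 5})")
print(f"DRE: {dre_pct:.1f}% (SLA met: {dre_pct >= 95})")
```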

Business leaders strive to keep their customers happy and reduce churn. The lower the DRE, the higher the defect leakage and customer dissatisfaction. A decreasing DRE is a sign that the Development and QA Teams lack adequate defect removal activities before a release. QA Directors can use this to justify investing in QA (processes, people, tools, and training) to the business in order to increase DRE.

DRE, in combination with other effective metrics, can help QA Directors identify the gaps in development and testing processes and practices that lead to poor-quality releases. As a QA Director, you should actively monitor and report against the defined SLAs for defect leakage and defect removal efficiency to your business leaders. Also, narrate your plan for increasing defect removal efficiency and decreasing defect leakage in your reports.

2. Time to Test (TTT)

Some organizations overlook time efficiency metrics. Rolling out first-to-market features gives the business a competitive advantage and is critical to capturing market share and profit. Time to Test (TTT) is the measure of the time required to complete testing and proceed to the next stages of development. This metric can help measure the effectiveness and efficiency of testing, and business leaders can get insights into how QA has contributed to reducing (or increasing) time-to-market. It also helps business leaders actively look into the QA Team’s process improvements and whether they are helping to reduce time-to-market.
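
A minimal sketch of how TTT might be tracked, assuming you record when testing starts and ends for each release; the release names and dates below are placeholders:

```python
from datetime import date

# Hypothetical test start and end dates per release (placeholder values).
releases = {
    "R1.0": (date(2021, 1, 4), date(2021, 1, 15)),
    "R1.1": (date(2021, 2, 1), date(2021, 2, 9)),
    "R1.2": (date(2021, 3, 1), date(2021, 3, 5)),
}

# Time to Test per release; a downward trend suggests QA is helping
# to shorten time-to-market rather than lengthen it.
for name, (start, end) in releases.items():
    ttt_days = (end - start).days
    print(f"{name}: Time to Test = {ttt_days} days")
```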

3. Cost of Defects

Does Software Testing save costs? If yes, then quantify it.

This metric tells you how much each defect costs you. For example, a bug found in production passes through different people before it gets fixed, and the longer that life cycle, the higher the cost. Here is how you measure the cost of a defect:

Development Activity | Rate/Hr (in USD) | Time Lost | Cost (in USD)
A Support Engineer trying to troubleshoot the issue/trying to reproduce it | $33 | 1 hour | $33
Engineering Manager evaluating the issue and assigning it to the right Developer | $64 | 0.5 hour | $32
Developer debugging the issue and finding the root cause | $50 | 3 hours | $150
Developer fixing and testing the issue | $50 | 1 hour | $50
Tester retesting the issue | $30 | 1 hour | $30
Ops work on releasing the patch | $60 | 1 hour | $60
Customer communication post-bug fix from customer representatives | $30 | 0.5 hour | $15
Total | | 8 hours | $370 (cost of a defect)

It’s important to note that these are not set-in-stone values, but rather some typical going rates. These may be different for your organization, but I wanted to provide an example. A simple way to find the cost per defect is to evaluate one month’s production bugs and the time needed to debug, fix, retest, and release them, then calculate the corresponding costs for the different roles involved in the process.
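
Here is the same arithmetic as a small sketch: sum hourly rate times time lost for every role that touches a production defect. The roles, rates, and hours simply mirror the illustrative table above; substitute your organization's own figures:

```python
# Each step a production defect passes through: (activity, hourly rate in USD, hours spent).
# Values mirror the illustrative table above; replace them with your own measurements.
defect_lifecycle = [
    ("Support Engineer troubleshoots/reproduces the issue", 33, 1.0),
    ("Engineering Manager evaluates and assigns the issue", 64, 0.5),
    ("Developer debugs and finds the root cause",           50, 3.0),
    ("Developer fixes and tests the issue",                 50, 1.0),
    ("Tester retests the issue",                            30, 1.0),
    ("Ops releases the patch",                              60, 1.0),
    ("Customer communication post-bug fix",                 30, 0.5),
]

total_hours = sum(hours for _, _, hours in defect_lifecycle)
cost_per_defect = sum(rate * hours for _, rate, hours in defect_lifecycle)

print(f"Time lost: {total_hours} hours")                      # 8.0 hours
print(f"Cost per production defect: ${cost_per_defect:.2f}")  # $370.00
```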

As we all know, the later bugs are discovered, the more expensive they are to fix. For example, if the same bug had been found earlier, while coding, the Developer could easily have fixed it with minimal time, cost, and dependencies:

Development Activity | Rate/Hr (in USD) | Time Lost | Cost (in USD)
Developer finding and fixing the issue at the code level | $33 | 0.2 hour | $6.60 (cost of the defect)

You can also calculate the cost of defects for different levels of Software Testing. The cost of defects, combined with the number of defects caught before release, can be used as a basis to figure out how much cost testing is saving the company.
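
One hedged way to turn these numbers into a savings figure is to multiply the number of defects caught before release by the difference between the production cost per defect and the cost of finding and fixing a defect during testing. The in-testing cost and the defect count below are assumptions for illustration only:

```python
# Illustrative per-defect costs: the production figure comes from the table above;
# the in-testing figure is a hypothetical placeholder for your own measurement.
cost_if_found_in_production = 370.00
cost_if_found_during_testing = 80.00

# Hypothetical number of defects caught by testing before release in one quarter.
defects_caught_before_release = 120

estimated_savings = defects_caught_before_release * (
    cost_if_found_in_production - cost_if_found_during_testing
)
print(f"Estimated cost avoided by pre-release testing: ${estimated_savings:,.2f}")
```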

4. ROI of Test Automation

QA leaders need to carefully choose the metrics that assess whether the organization is getting acceptable value from its investment in Test Automation. Ask yourself the following important questions to help measure and justify the value you get from that investment. Then, derive metrics that can answer these questions; a small calculation sketch follows the list below.

  • How many test cases can be automated and how many have we automated?
    • Percentage of Automatable Test Cases: To analyze what percentage of Automation can be achieved in your software.
Percentage of Automatable Test Cases (%) = (Number of test cases that can be automated / Total number of test cases) × 100
    • Automation Progress: To analyze your progress toward your Automation goal.
Automation Progress = Number of test cases actually automated / Number of test cases that are automatable
    • Test Coverage: To analyze what percentage of your total coverage is exercised by automated tests.
Test Coverage (%) = (AC / C) × 100
AC = Automation coverage
C = Total coverage
  • How much time are you saving with Test Automation?
    • Time Saved: Time saved is cost saved. This metric helps you analyze whether your investment in Test Automation is saving time and accelerating your testing.
TS = ME - (AE + F)
TS = Time Saved
ME = Time required for Manual Execution
AE = Time required for Automated Execution
F = Fragility (time needed to maintain and update scripts)
  • How much effort would it take to execute the same tests manually without Automation?
    • Equivalent Manual Test Effort (EMTE): To determine the effort saved by Test Automation (Dorothy Graham, 2010). If an automated test takes 3 hours to run manually, then its EMTE is 3 hours. If we run this automated test 3 times in a sprint, then the EMTE for that sprint is 3 × 3 = 9 hours.
  • How many risks are we reducing by Test Automation?
    • Test Case Effectiveness (Automation): To analyze the quality of the automated test cases and their ability to find issues.
Test Case Effectiveness (%) = (Number of defects detected / Number of automated test cases run) × 100
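
To make the relationships between these formulas concrete, here is a small sketch that computes each of them from hypothetical counts and timings; none of the numbers come from a real project:

```python
# Hypothetical counts and timings for one sprint (placeholder values).
total_test_cases = 400
automatable_test_cases = 320
automated_test_cases = 240

manual_execution_hours = 160     # ME: time to run the suite manually
automated_execution_hours = 12   # AE: time to run the automated suite
fragility_hours = 8              # F: time spent maintaining/updating scripts
automated_runs_per_sprint = 3

defects_detected_by_automation = 18

# Percentage of Automatable Test Cases
automatable_pct = automatable_test_cases / total_test_cases * 100

# Automation Progress (toward the automatable goal)
automation_progress = automated_test_cases / automatable_test_cases

# Time Saved: TS = ME - (AE + F)
time_saved_hours = manual_execution_hours - (automated_execution_hours + fragility_hours)

# Equivalent Manual Test Effort: manual-equivalent hours per run * runs per sprint
emte_hours = manual_execution_hours * automated_runs_per_sprint

# Test Case Effectiveness (Automation)
effectiveness_pct = defects_detected_by_automation / automated_test_cases * 100

print(f"Automatable test cases: {automatable_pct:.0f}%")
print(f"Automation progress: {automation_progress:.0%} of the automatable goal")
print(f"Time saved per cycle: {time_saved_hours} hours; EMTE per sprint: {emte_hours} hours")
print(f"Test case effectiveness: {effectiveness_pct:.1f}%")
```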

There are no universal metrics that work for everyone. As a QA leader, you need to choose the right metrics to determine your team’s successes and deficiencies transparently. The chosen metrics should provide insights into the effectiveness of the current testing methods. Often, QA goals are not aligned with organizational goals, making testing reactive rather than proactive. As a QA Director, your job is to tie your QA goals to the business goals. ‘Industry best practices’ mean nothing if they aren’t aligned with what your organization is trying to accomplish. Share your organizational vision with your team and make them understand how their daily tasks directly impact the success of the organization. Then, coach your team to develop, track, and achieve their shared goals.

Real-World Example

LogiGear has seen many of our customers struggle with this challenge. One customer program, in particular, was an enterprise software suite in the energy space. The Senior QA Manager was seeing numerous issues: 

  • Test design and Automation technical debt accumulating at the end of each release cycle
  • Reduced in-sprint automated tests and a backlog of test cases and associated automated test scripts that needed maintenance updates 
  • Missed defects 
  • Schedule slippage 

The Manual Test Team had to manually execute test cases whenever the automated script needed to be updated and couldn’t be executed, or failed with a “known issue” (a known build change); but that work wasn’t planned for and couldn’t be completed by the Automation Team in time for the next regression run or daily smoke test run. This unplanned workload on the QA Team was eating into and degrading Automation ROI. The LogiGear QA Team worked closely with the Sr. QA Manager and her staff to analyze the various Project Teams’ working processes, the data being reported, and the current metrics being tracked.

Scrum Teams, QA Leads, and Manual and Automation Test Engineers were doing a certain level of release cycle planning/estimation, and providing that input data to the Sr. QA Manager. This included things such as: 

  • The estimated number of new manual test cases planned
  • The number of missing manual test cases in the backlog from the previous release cycle (missing test coverage) that needed to be designed
  • The number of manual test cases in the backlog from the last release cycle that needed to be automated
  • The estimated number of test cases that could be designed and then automated in the coming release cycle, based on average test case T-Shirt sizing (Small, Medium, Large)

Part of the problem was that the T-Shirt sizing was too general and didn’t account for X-Large or even XX-Large test cases. Additionally, some test designs had Automation complexity factors around verification points involving 3D image comparison and other complex data requirements or technical complexities. This increased the actual time spent working with the Manual/Domain Tester or other project stakeholders, as the Automation Engineers needed to first understand the test objectives, then the pass/fail criteria of each verification point in the test design, then spend time automating the test case steps and verification points, and finally work with stakeholders to verify that the results were correct. It also took additional support time during the current iteration, which slowed down the completion of assigned tasks.

The result was that, iteration by iteration, tasks were delayed or left incomplete and spilled over into the next sprint; the backlog of technical debt grew, causing gaps in both manual and automated test coverage; defects were missed (or found later in the cycle than they could have been); and release milestones slipped.

LogiGear created more granular and detailed T-Shirt sizing, adding X-Large and XX-Large sizes with better-defined criteria for all sizes. LogiGear also recommended a process modification to facilitate more focused iteration planning, estimation and data creation, reporting/tracking, and metrics. The company adopted these changes as it transitioned to a shorter, 8-week LRP (Long Range Planning) schedule with one-week sprints.

On LogiGear’s recommendation, the Project Teams first prioritized and then broke down the backlog of test cases, both (a) those needing to be automated and (b) those identified as failing due to a valid build change, into 4 separate tasks for each test case:

  1. Manual test case design/stakeholder review/signoff
  2. One-to-one between Manual/Domain Engineer and Automation Engineer
  3. Test Case Automation with stakeholder review/signoff
  4. Test script stabilization, across all required platforms

As a result of making these changes to process, data capture, analysis, tracking, and reporting metrics, the Sr. QA Manager was able to account for and track the actual time to design, maintain, and execute the Automation frameworks, including iteration post-mortem lessons learned, and to continually improve the process. The technical debt backlog was worked down at a rate targeted and approved by management, while new additions to the backlog were greatly reduced. Another huge benefit was that the data and metrics needed to actually account for QA were now available and being tracked; the team could plug in their own numbers, which showed improved velocity and productivity, with 8-week LRP actuals trending closer and closer to LRP estimates.

Summary

Executive-level reporting is the primary source of business intelligence that business leaders rely on to make more accurate, data-driven decisions that help them remain competitive in today’s market. QA reporting may sometimes overwhelm or confuse leaders while offering no insights. In the Agile/SAFe/DevOps era, business leaders are looking for metrics that can indicate the business value of Software Testing and assess the value derived from their investment in it. In fact, the SAFe framework clearly defines the importance of creating value for both internal and external customers. This value stream must be quantified through metrics.

Test Automation needs significant time to design, execute, and maintain. Several organizations track neither the time to design, maintain, and execute tests nor the cost per hour of those efforts. To stop “Flying Blind” when accounting for QA, you should track project and feature expansion year-over-year and quarter-over-quarter to see what your project demand increases look like. Even though QA cannot scale 1:1 with Development, QA can hedge IT expansion with offshoring, cloud-based Automation tools, and infrastructure, as well as ensure that right-sized resourcing, tools, infrastructure, and training are accounted for. Thus, when presenting QA metrics to business stakeholders, it’s important that you choose and present the right metrics, ones that give transparency to your team’s work and accomplishments.

Clayton Simmons
Clayton Simmons has 10+ years in the Enterprise Services business. Clayton ran Cognizant’s Digital Assurance Practice in 2012 and led an organization focused solely on testing enterprise digital solutions for both Mobile and IoT. He successfully rolled out a Customer Experience (CX) Test offering, which revolutionized the approach to perceived quality over the traditional functional validation offered by so many other testing services at that time.
