One of the most common challenges business leaders face is a lack of visibility into QA activities. QA leaders have a tough time communicating the impact, value, and ROI of testing to executives in terms they can understand. Traditional reporting practices often fail to paint the full picture and do not demonstrate the value that the QA Team provides to the business. Business leaders cannot connect QA metrics with business risks and objectives, and the information in executive reports does not provide enough detail to guide decision-making. This can leave them feeling they are “Flying Blind” when accounting for QA. They are no longer satisfied with a few charts and tables; they look for insights that drive performance and effective decision-making. In this article, we will discuss what business leaders really need from you so they can make informed decisions and carry out effective risk management.
Differences in Perspective: Engineer vs. QA Leader
Test Engineers measure the success of the particular testing programs or initiatives they are a part of rather than overall organizational objectives. Test Engineers and QA leaders develop and analyze metrics from two different perspectives. The basic difference is this: QA leaders are interested in metrics that reflect the overall project or organization and can be used to drive strategy, while Test Engineers are interested in developing and presenting metrics focused mostly on their own testing activities. Though the two views are different, they are equally important.
QA leaders act as a bridge between Testers and the business. They receive input from different teams and people with different roles, which uniquely positions them to look at the bigger picture and reflect on overall quality within the organization. When QA leaders change their viewpoint and look at things from a Tester’s perspective, it helps them justify decisions and budget. The microscopic metrics from Testers enable QA leaders to gauge in depth how every team is performing and to plan for required resources and improvements.
Test Engineers generate reports from multiple tools. It becomes difficult for QA leaders to perform analysis across reports spread over test case management tools, ALM tools, Automation tools, and dashboards, and they can miss important information while hopping from one report to another. QA leaders can eliminate the reports that do not contribute to decision-making by creating dashboards covering the metrics that really matter.
Grab the Attention of Your Business Leaders and Make Them Read Your Reports
It’s important that QA leaders confer with business leaders before choosing which metrics to track and report. Business leaders are busy, and their attention spans shrink the higher they sit on the corporate ladder. Reports make a difference only when your leaders read and understand them, and if the metrics being tracked do not speak directly to the items business leaders deem important, they will fail to make an impact. Additionally, creating test reports is time-intensive and tedious, and it can sometimes delay the actual testing as well; QA leaders may have to collect data from multiple sources and stitch it together to make sense of it. Here is how to get your leaders to read your reports in an efficient and effective way:
- Write effective executive summaries
It is important to tailor your summary for your audience. Business leaders read the summary to determine whether the full report is worth their time. The executive summary is the first thing, and maybe the only thing, your executives will read to get the essence of your report without drilling down into finer details. So take the time to write an effective executive summary that is short, precise, and summarizes your report in the business context.
- Keep your reports Lean and to the point
An executive QA report should contain only the key information that business leaders want to know. QA leaders can sometimes get carried away providing sophisticated detail about QA activities, which can overload the report and make it hard for leaders to grasp. Focus on agreed-upon KPIs and keep your reports simple.
Be clear on:
- Why are you writing this QA report?
- What information do your business leaders need to know?
- How will this information benefit or contribute to organizational goals?
- Tell a good story instead of just a few charts and data points
Metrics are irrelevant if they are taken out of context. The objective of creating a QA report is to shed light on a challenge, educate your stakeholders on something, or influence their decisions. But often these QA reports have little to no narrative, which can make executives tune out pretty quickly despite some great insights! Keep a big-picture view of what you want to convey with your report, then be creative in grabbing your leaders’ attention with effective visualizations. Ensure that you deliver the reports to the right people, at the right time, with the right information.
- Be honest and avoid bias while reporting
Do not shy away from reporting KPIs/metrics that are trending down or showing negative results. When drafting a report, QA leaders may worry that negative trends will hurt their credibility in the organization, but depicting an overly optimistic story about your testing is not going to help you. Business leaders trust people who are honest and brave enough to present the negatives, along with the lessons learned from them, to improve in the future.
Though most of us consider ourselves rational beings, the truth is we are all biased. We often fall into the bias trap unconsciously, which makes us lose objectivity. We cannot avoid biases, but we can definitely manage them. Try to be as objective as possible while drafting your reports, and facilitate an objective point of view by having your reports reviewed by peers with different perspectives.
What’s Needed for Decision-Makers to Make QA Decisions? Which Metrics Actually Matter?
Decision-makers need to know how your testing and Automation efforts are impacting the organization. QA leaders should present how testing contributes to business goals. A QA report with lots of statistics but without any actionable insights is not that valuable. Several quality metrics that QA leaders report are useful only in a testing context and do not translate into business benefits. Here are a few metrics that can measure the value added by testing and Test Automation:
1. Metrics Around Customer Bugs
One of the objectives of testing efforts is to ensure the release of high-quality software that meets the expectations of the end-users. Tracking the number of issues reported by customers allows you to measure the effectiveness of your testing and development processes.
- Defect Leakage
Defect leakage is a metric that measures the percentage of defects that leaked from the testing stages to production. It is one of the most useful metrics for measuring the effectiveness of your testing.
Defect Leakage (%) = (Number of defects found by customers / Number of valid defects found during testing) * 100
- Defect Removal Efficiency (DRE)
Defect removal efficiency (DRE) is the correlation of bugs found by Testers in testing stages with the number of bugs that were detected by customers in production. DRE is the Testing Team’s ability to find and remove bugs before production.
DRE (%) = Number of defects found and eliminated during testing / (Number of defects found during testing + Number of defects found post-release) * 100
Define service-level agreements (SLAs) for defect leakage and defect removal efficiency. For example:
Defect Leakage (DL) should be less than 5%, or Defect Removal Efficiency (DRE) should be 95% or greater.
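To make these SLAs easy to monitor release over release, the two formulas can be wired into a small script. Below is a minimal sketch in Python; the counts, the helper names, and the 5%/95% thresholds are illustrative, and the real numbers would come from your defect tracker.

```python
# Minimal sketch (Python): computing Defect Leakage and DRE and checking them
# against the example SLAs above. All counts are illustrative; pull real
# numbers from your defect tracker.

def defect_leakage(customer_defects: int, test_defects: int) -> float:
    """Percentage of defects that escaped testing and were found by customers."""
    return customer_defects / test_defects * 100

def defect_removal_efficiency(test_defects: int, post_release_defects: int) -> float:
    """Percentage of all defects that were found and removed before release."""
    return test_defects / (test_defects + post_release_defects) * 100

found_in_testing = 200    # valid defects found during testing (illustrative)
found_by_customers = 8    # defects reported from production (illustrative)

dl = defect_leakage(found_by_customers, found_in_testing)
dre = defect_removal_efficiency(found_in_testing, found_by_customers)

print(f"Defect Leakage: {dl:.1f}% (SLA: < 5%)   -> {'PASS' if dl < 5 else 'FAIL'}")
print(f"DRE:            {dre:.1f}% (SLA: >= 95%) -> {'PASS' if dre >= 95 else 'FAIL'}")
```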
Business leaders strive to keep their customers happy and reduce churn. The lower the DRE, the higher the defect leakage and customer dissatisfaction. A decreasing DRE is a sign that the Development and QA Teams lack adequate defect removal activities before release. QA Directors can use this to justify investing in QA (processes, people, tools, and training) in order to increase DRE.
DRE, in combination with other effective metrics, can help QA Directors identify gaps in the development and testing processes and practices that lead to poor-quality releases. As a QA Director, you should actively monitor and report the defined SLAs for defect leakage and defect removal efficiency to your business leaders, and narrate your plan for increasing defect removal efficiency and decreasing defect leakage in your reports.
2. Time to Test (TTT)
Some organizations overlook time-efficiency metrics. Rolling out features first to market gives the business a competitive advantage and is critical to capturing market share and profits. Time to Test (TTT) is the measure of the time required to complete testing and proceed to the next stages of development. This metric helps measure the effectiveness and efficiency of testing, and it gives business leaders insight into how QA has contributed to reducing (or increasing) time-to-market. It also helps business leaders actively evaluate whether process improvements in the QA Team are helping to reduce time-to-market.
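If you want to report TTT as a trend rather than a single number, a small script can compute it per release from testing start and finish dates. The sketch below is illustrative; the release names and dates are made up, and in practice they would come from your test management or ALM tool.

```python
# Minimal sketch (Python): tracking Time to Test (TTT) per release to show the
# trend. Release names and dates are made up; replace them with data from your
# test management or ALM tool.
from datetime import date

releases = {
    "R1.0": (date(2023, 1, 9), date(2023, 1, 20)),   # (testing start, testing finish)
    "R1.1": (date(2023, 2, 13), date(2023, 2, 22)),
    "R1.2": (date(2023, 3, 13), date(2023, 3, 20)),
}

for name, (start, finish) in releases.items():
    ttt_days = (finish - start).days
    print(f"{name}: Time to Test = {ttt_days} days")
```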
3. Cost of Defects
Does Software Testing save costs? If yes, then quantify it.
This metric tells you how much each defect costs you. For example, a bug in production goes through different people before it gets fixed; the longer that life cycle, the higher the cost. Here is how you measure the cost of a defect:
Development Activity | Rate/Hr (in USD) | Time Lost | Cost (in USD) |
---|---|---|---|
A Support Engineer trying to troubleshoot the issue/trying to reproduce it. | $33 | 1 hour | $33 |
Engineering Manager evaluating the issue and assigning it to the right Developer. | $64 | 0.5 hour | $32 |
Developer debugging the issue and finding the root cause. | $50 | 3 hours | $150 |
Developer fixing and testing the issue. | $50 | 1 hour | $50 |
Tester retesting the issue. | $30 | 1 hour | $30 |
Ops work on releasing the patch. | $60 | 1 hour | $60 |
Customer communication post-bug fix from customer representatives. | $30 | 0.5 hour | $15 |
Total | | 8 hours | $370 (cost of one defect) |
It’s important to note that these are not set-in-stone values, but rather are some standard going rates. These may be different for your organization, but I wanted to provide an example. A simple way to find the cost per defect is to evaluate one month’s production bugs and the time needed to debug, fix, retest, and release them. Calculate the corresponding costs associated with different roles involved in the process.
As we all know, the later bugs are discovered, the more expensive they are to fix. For example, if the same bug had been found earlier, while coding, the Developer could easily have fixed it with minimal time, cost, and dependencies:
Development Activity | Rate/Hr (in USD) | Time Lost | Cost (in USD) |
---|---|---|---|
Developer finding and fixing the issue at the code level. | $33 | 0.2 hour | $6.60 (cost of one defect) |
You can also calculate the cost of defects at different levels of Software Testing. The cost of defects, together with the number of defects, can be used as a basis to figure out how much cost testing is saving the company.
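One simple way to automate this calculation is to list each activity with its hourly rate and time lost, then sum the costs. The sketch below mirrors the example table above; the activity names, rates, and hours are the same illustrative values and should be replaced with your organization’s own figures.

```python
# Minimal sketch (Python): estimating the cost of one production defect from the
# roles involved. Activity names, rates, and hours mirror the example table
# above; substitute your organization's own figures.

activities = [
    # (activity, rate per hour in USD, hours lost)
    ("Support Engineer troubleshoots/reproduces the issue", 33, 1.0),
    ("Engineering Manager evaluates and assigns the issue", 64, 0.5),
    ("Developer debugs and finds the root cause",            50, 3.0),
    ("Developer fixes and tests the issue",                  50, 1.0),
    ("Tester retests the issue",                             30, 1.0),
    ("Ops releases the patch",                               60, 1.0),
    ("Customer communication after the fix",                 30, 0.5),
]

total_hours = sum(hours for _, _, hours in activities)
total_cost = sum(rate * hours for _, rate, hours in activities)

print(f"Total time lost: {total_hours} hours")              # 8.0 hours
print(f"Cost of one production defect: ${total_cost:.2f}")  # $370.00
```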
4. ROI of Test Automation
QA leaders need to carefully choose metrics to assess whether the organization is getting acceptable value from its investment in Test Automation. Ask yourself the following important questions to help measure and justify the value you get from the investment in Test Automation, then derive metrics that can answer them (a short sketch pulling these metrics together follows the list below):
- How many test cases can be automated and how many have we automated?
- Percentage of Automatable Test Cases: To analyze what percentage of Automation can be achieved in your software.
Percentage of Automatable Test Cases = (Number of test cases that can be automated / Total number of test cases) * 100
- Automation Progress: To analyze your progress towards your Automation goal.
Automation Progress = Number of test cases actually automated / Number of test cases that are automatable
- Test Coverage: To analyze what percentage of your codebase is exercised by your tests.
Test Coverage (%) = AC / C, where AC = Automation coverage and C = Total coverage
- How much time are you saving with Test Automation?
- Time Saved: Time saved is cost saved. This metric helps you to analyze if your investment in Test Automation is saving time and accelerating your testing.
TS = ME - (AE + F), where TS = Time Saved, ME = Time required for Manual Execution, AE = Time required for Automated Execution, and F = Fragility (time needed to maintain and update scripts)
- How much effort would it take to execute the same tests manually without Automation?
- Equivalent Manual Test Effort (EMTE): To determine the efforts saved by Test Automation (Dorothy Graham, 2010).
- If an automated test would take 3 hours to run manually, then its EMTE is 3 hours. If we run this automated test 3 times in a sprint, then the EMTE for that sprint is 3 * 3 = 9 hours.
- How many risks are we reducing by Test Automation?
- Test Case Effectiveness (Automation): To analyze the quality of the automated test cases and their ability to find issues.
Test Case Effectiveness = (Number of defects detected / Number of automated test cases run) x 100
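As a rough illustration of how these formulas fit together, the sketch below derives the ROI metrics above from raw counts and execution times. Every input value (test case counts, execution hours, defects found) is a placeholder; substitute figures from your own Automation suite.

```python
# Minimal sketch (Python): deriving the Test Automation ROI metrics above from
# raw counts and execution times. Every input value is an illustrative
# placeholder; substitute figures from your own Automation suite.

total_test_cases = 500
automatable_test_cases = 400
automated_test_cases = 300

manual_execution_hours = 120.0    # ME: time to run the suite manually
automated_execution_hours = 10.0  # AE: time to run the automated suite
maintenance_hours = 15.0          # F:  fragility (script maintenance/updates)

defects_found_by_automation = 24
automated_tests_run = 300

pct_automatable = automatable_test_cases / total_test_cases * 100
automation_progress = automated_test_cases / automatable_test_cases * 100  # expressed as a percentage
time_saved = manual_execution_hours - (automated_execution_hours + maintenance_hours)
test_case_effectiveness = defects_found_by_automation / automated_tests_run * 100

print(f"Automatable test cases:  {pct_automatable:.0f}%")
print(f"Automation progress:     {automation_progress:.0f}%")
print(f"Time saved per cycle:    {time_saved:.0f} hours")
print(f"Test case effectiveness: {test_case_effectiveness:.1f}%")
```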
There are no universal metrics that work for everyone. As a QA leader, you need to choose the right metrics to gauge your team’s successes and deficiencies transparently. The chosen metrics should provide insights into the effectiveness of the current testing methods. Often, QA goals are not aligned with organizational goals, making testing reactive rather than proactive. As a QA Director, your job is to tie your QA goals to the business goals. ‘Industry best practices’ mean nothing if they aren’t aligned with what your organization is trying to accomplish. Share your organizational vision with your team and make them understand how their daily tasks directly impact the success of the organization. Then, coach your team to develop, track, and achieve their shared goals.
Real-World Example
LogiGear has seen many of our customers struggle with this challenge. One customer program, in particular, was an enterprise software suite in the energy space. The Senior QA Manager was seeing numerous issues:
- An accumulation of test design and Automation technical debt building up at the end of each release cycle
- Reduced in-sprint automated tests and a backlog of test cases and associated automated test scripts that needed maintenance updates
- Missed defects
- Schedule slippage
The Manual Test Team had to manually execute test cases whose automated scripts needed updates and either could not be executed or failed with a “known issue” (a known build change); that maintenance work wasn’t planned for and couldn’t be completed by the Automation Team in time for the next regression run or daily smoke test run. This unplanned workload on the QA Team was eating into and degrading Automation ROI. The LogiGear QA Team worked closely with the Sr. QA Manager and her staff to analyze the various Project Teams’ working processes, the data being reported, and the current metrics being tracked.
Scrum Teams, QA Leads, and Manual and Automation Test Engineers were doing a certain level of release cycle planning/estimation, and providing that input data to the Sr. QA Manager. This included things such as:
- The estimated number of new manual test cases planned
- The number of missing manual test cases in the backlog from the previous release cycle (missing test coverage) that needed to be designed
- The number of manual test cases in the backlog from the last release cycle that needed to be automated
- The estimated number of test cases that could be designed and then automated in the coming release cycle, based on average test case T-Shirt sizing (Small, Medium, Large)
Part of the problem was that the T-Shirt sizing was too general and didn’t account for X-Large, or even XX-Large test cases. Additionally, some test designs had Automation complexity factors around verification points, involving 3D image comparison and other complex data requirements or technical complexities. This increased the actual time spent working with the Manual/Domain Tester or other project stakeholders, as they needed to first understand the test objectives, then the pass/fail criteria of each verification point in the test design, then spend time automating the test case steps and verification points, and finally work with stakeholders to verify that the results were correct. It also took additional support time during the current iteration, which slowed down the completion of assigned tasks.
The result was that, iteration by iteration, tasks were delayed or not completed and spilled over into the next sprint; the backlog of technical debt grew, causing gaps in both manual and automated test coverage; defects were missed (or found later in the cycle than they could have been); and release milestones slipped.
LogiGear created a more granular and detailed T-Shirt sizing, including additional X-Large and XX-Large sizes with better-defined criteria for all sizes. LogiGear also recommended a process modification to facilitate more focused iteration planning, estimation and data creation, reporting/tracking, and metrics. The company adopted this as it transitioned to a shorter, 8-week LRP (Long Range Planning) schedule and one-week sprints.
On LogiGear’s recommendation, the Project Teams first prioritized the backlog of test cases, both (a) those needing to be automated and (b) those identified as failing due to valid build changes, and then broke each test case down into 4 separate tasks:
- Manual test case design/stakeholder review/signoff
- One-to-one between Manual/Domain Engineer and Automation Engineer
- Test Case Automation with stakeholder review/signoff
- Test script stabilization, across all required platforms
As a result of making these changes to process, data capture, analysis, tracking, and reporting metrics, the Sr. QA Manager was able to begin to account for and track the actual “time to design,” maintain, and execute Automation, and to fold iteration post-mortem lessons learned back into the process for continual improvement. The technical debt backlog was worked down at a rate targeted and approved by management, while new additions to the backlog were greatly reduced. Another huge benefit was that the data and metrics needed to actually account for QA were now available and being tracked, allowing the team to plug in its own numbers, which showed improved velocity and productivity, with 8-week LRP actuals trending closer and closer to LRP estimates.
Summary
Executive-level reporting is the primary source of business intelligence that business leaders rely on to make more accurate, data-driven decisions that will help them remain competitive in today’s market. QA reporting can sometimes overwhelm or confuse leaders without delivering insights. In the Agile/SAFe/DevOps era, business leaders are looking for metrics that indicate the business value of Software Testing and assess the value derived from their investment in it. In fact, the SAFe framework clearly defines the importance of creating value for both internal and external customers. This value stream must be quantified through metrics.
Test Automation needs significant time to design, execute, and maintain. Several organizations do not track the “time to design,” maintain, and execute tests, nor the cost per hour of those efforts. To stop “Flying Blind” when accounting for QA, you should track project and feature expansion year-over-year and quarter-over-quarter to see what your increases in project demand look like. Even though QA cannot scale 1:1 with Development, QA can hedge IT expansion with offshoring, cloud-based Automation tools, and infrastructure, as well as ensure that right-sized resourcing, tools, infrastructure, and training are accounted for. Thus, when presenting QA metrics to business stakeholders, it’s important that you choose and present the right metrics that give transparency into your team’s work and accomplishments.