Letter from the Editor – March 2021

A while ago, I helped start a Software Quality Certificate Program as part of the Software Engineering Program at the University of California, Santa Cruz Extension in Silicon Valley, where I served on the Board of Advisors. While we were putting the curriculum together, a few people suggested a Measurement and Metrics course. Since I was already teaching a few classes in the program, they asked me to write and lead it. I surprised even myself with how quickly––and how loudly––I replied “NO!”

The whole room froze and looked at me. I laughed. I told them that there was no way I would go near that class––not even with a 10-foot pole.

The problem with measurement is that it is all context-driven: every organization is unique, every product is at a different level of maturity and has its own meaningful measurement needs, team members have personal preferences about what informs their decisions, and different tools ship with predesigned reports or dashboards reflecting what their makers think you may want to know. Reporting from QA and Test Teams serves many purposes, but it must also support the business goals, so measurements and metrics must be relevant to those goals. The reporting needs to be actionable, meaning that the data informs decision-makers and arms them with what they need for:

  • Assessing product readiness (Go, no-go, not yet, etc.),
  • Ascertaining process efficiency (that testing and development are getting done as efficiently as needed), and
  • Accurately scoping project sizing, staffing, resourcing, tool needs, devices, etc.

Measurement and metrics reporting have a rocky history because measurements are easy to generate, so some people over-report, making it the reader’s job to sift through and make sense of all of the data. Eliminating waste, being Lean, and cutting overhead make for better management. “Just the Facts” and “Less is More” are better mantras for reporting today.

Rather than invent some acronym, I will say plainly that, in my experience, the reporting you do must be:

  1. Correct. This may go without saying, but I could tell you horror stories of incorrect or missing information or measurements leading to serious problems.
  2. Easy to generate or capture. Once you have defined the measurements to report to the team, they need to be easy to capture. If you regularly have to grab one piece of data from one tool by hand, grab another piece from a second tool, paste them into an Excel spreadsheet, and calculate something to send to decision-makers or team members… think again. Capturing and calculating reporting needs to be automated (see the sketch after this list).
  3. Understood. The team members you report this information to need to understand what it does and does not mean––and perhaps even what good numbers and bad numbers might look like. It is very common for measurements or metrics to be misjudged by members of the team because they were never fully explained and there was never agreement on what is actually being measured and why.
  4. Used! The reporting is used for action. The right people do things, make decisions, make changes, get staff, get time, bless a release, or hold a release based on the numbers we give them. If it turns out that people are not making decisions based on these numbers…
  5. Reevaluate. Change them. If people are not acting on what you report, if there is too much misunderstanding, or if you are not getting the result you intended, find a different measure––one that is easy to understand and that the team will actually use for decision-making. Remember: there are no “best practices,” only continuous improvement.
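To illustrate the second point, here is a minimal sketch in Python of what automated capture might look like: a small script that pulls two numbers from two tool APIs and computes a pass rate, so no one has to copy figures into a spreadsheet by hand. Every URL and field name below is a hypothetical placeholder, not any real tool’s API.

    # Minimal sketch of automated metric capture. All endpoints and
    # field names are hypothetical placeholders for illustration only.
    import requests  # third-party HTTP library: pip install requests

    TEST_RUNS_URL = "https://testtool.example.com/api/runs/latest"  # hypothetical
    DEFECTS_URL = "https://tracker.example.com/api/defects/open"    # hypothetical

    def fetch_json(url: str) -> dict:
        """Fetch one JSON document, failing loudly on HTTP errors."""
        response = requests.get(url, timeout=10)
        response.raise_for_status()
        return response.json()

    def build_report() -> str:
        """Combine data from both tools into one short, factual report."""
        runs = fetch_json(TEST_RUNS_URL)    # e.g. {"passed": 182, "failed": 9}
        defects = fetch_json(DEFECTS_URL)   # e.g. {"open": 14}
        total = runs["passed"] + runs["failed"]
        pass_rate = runs["passed"] / total if total else 0.0
        return (f"Pass rate: {pass_rate:.1%} ({runs['passed']}/{total})\n"
                f"Open defects: {defects['open']}")

    if __name__ == "__main__":
        print(build_report())

In practice, a script like this would run on a schedule (a cron job or a CI step) and deliver the numbers to the team automatically, keeping the report correct, easy to capture, and consistent from one week to the next.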

Reporting is complex because it is always context-driven and often political. My biggest piece of advice for reporting is to be Lean and be careful.

This issue is packed full of content aimed at helping QA leaders ensure their team’s efforts are actively (and accurately) communicated to business decision-makers. Our cover story, How to Stop “Flying Blind” When Accounting for QA, was written by LogiGear’s SVP of Sales Clayton Simmons and offers actionable insight for QA leaders who may be struggling to find the “perfect mix” of metrics when reporting to decision-makers. The Midpoint Crisis: How Automation Can Make More Manual Testing Work, by Michael Larsen, explores how to avoid taking on more manual tasks as an Automation program matures. How to Be Seen, by Kristin Jackvony, is aimed at entry-level QA professionals who are trying to make their work known and climb the corporate ladder. This issue’s Blogger of the Month, Wes Silverstein, explores the important role QA plays in planning Automation sprints in The Role of QA in Sprint Planning. Our infographic, 5 Incredibly Useful KPIs for Test Automation, walks through 5 of our top-recommended testing metrics to use in your QA reports, including formulas and the reasons they’re pertinent. Finally, check out the new features and functionality of TestArchitect Version 9.0 in TestArchitect Corner.

We hope you enjoy this issue––happy testing!

Michael Hackett
Michael is a co-founder of LogiGear Corporation and has over two decades of experience in software engineering in banking, securities, healthcare, and consumer electronics. Michael is a Certified Scrum Master and has co-authored two books on software testing: Testing Applications on the Web: Test Planning for Mobile and Internet-Based Systems (Wiley, 2nd ed. 2003) and Global Software Test Automation (Happy About Publishing, 2006). He is a founding member of the Board of Advisors at the University of California Berkeley Extension and has taught for the Certificate in Software Quality Engineering and Management at the University of California Santa Cruz Extension. As a member of IEEE, his training courses have brought Silicon Valley testing expertise to over 16 countries. Michael holds a Bachelor of Science in Engineering from Carnegie Mellon University.
