2010 – 2011 LogiGear Global Testing Survey Results – Metrics and Measurements

METRICS AND MEASUREMENTS

MM1. Do you have a metric or measurement dashboard built to report to your project team?

Response | Response percent | Response count
Yes | 69% | 49
No | 31% | 22

Result analysis: Anything worth doing is worth measuring. Why would almost one-third of teams not measure? Is the work not important or respected? Does the project team not care about the work you do?

I am not measurement-obsessed, but when test groups do not report measurements back to the project team, it is very often the sign of a bigger problem.
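
A dashboard does not need to be elaborate. As a minimal sketch in Python, with illustrative field names of my own choosing rather than any particular tool's, here is the kind of per-build snapshot a test team might publish to the project team:

```python
from dataclasses import dataclass

@dataclass
class TestDashboardSnapshot:
    """One build's worth of test metrics to share with the project team.
    Field names are illustrative, not taken from any particular tool."""
    build_id: str
    test_cases_planned: int
    test_cases_executed: int
    test_cases_passed: int
    bugs_open: int
    bugs_high_priority: int

    @property
    def execution_progress(self) -> float:
        """Percent of planned test cases executed so far."""
        return 100.0 * self.test_cases_executed / self.test_cases_planned

    @property
    def pass_rate(self) -> float:
        """Percent of executed test cases that passed."""
        return 100.0 * self.test_cases_passed / self.test_cases_executed

# Made-up numbers for illustration.
snapshot = TestDashboardSnapshot("build-142", 200, 150, 132, 48, 6)
print(f"{snapshot.build_id}: {snapshot.execution_progress:.0f}% executed, "
      f"{snapshot.pass_rate:.0f}% passing, "
      f"{snapshot.bugs_high_priority} high-priority bugs open")
```
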
MM2. If yes, are these metrics and measurements used or ignored?

Response | Response percent | Response count
This information along with schedule decides product release | 43.40% | 23
Some attention is paid to them | 26.40% | 14
This information decides product release | 18.90% | 10
Minimal attention is paid to them | 5.70% | 3
They are ignored | 5.70% | 3

Result analysis: It is good to see that over 60% of teams reporting metrics use the information to decide release. Test team metrics should help decide product release.

As with the previous question, if the project team is not paying attention to test measurements, that is often the sign of a problem.


MM3. What testing specific metrics do you currently collect and communicate? (You may select multiple answers.)

Response | Response percent | Response count
Test case progress | 84.10% | 53
Bug/defect/issue metrics (total # open, # closed, # high priority, etc.) | 82.50% | 52
Requirements coverage | 49.20% | 31
Defect density | 34.90% | 22
Defect aging | 25.40% | 16
Root cause analysis | 25.40% | 16
Code coverage | 22.20% | 14
Defect removal rates | 22.20% | 14
Requirements stability/requirements churn | 17.50% | 11
Hours tested per build | 11.10% | 7

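Two metrics from this list, defect density and defect aging, are worth a concrete illustration because teams define them loosely. A minimal sketch in Python, with made-up data; the divisor for density could just as well be features or requirements as thousands of lines of code:

```python
from datetime import date

# Defect density: defects found per thousand lines of code (KLOC).
# The divisor could equally be features, requirements, or test hours.
defects_found = 46        # illustrative count
kloc = 12.5               # illustrative size of the code under test
print(f"Defect density: {defects_found / kloc:.1f} defects/KLOC")

# Defect aging: how long each open defect has been open, in days.
open_defects = {          # hypothetical defect IDs and opened dates
    "BUG-101": date(2011, 1, 4),
    "BUG-117": date(2011, 2, 18),
    "BUG-123": date(2011, 3, 1),
}
today = date(2011, 3, 15)
ages = {bug: (today - opened).days for bug, opened in open_defects.items()}
print(f"Oldest open defect: {max(ages.values())} days; "
      f"average age: {sum(ages.values()) / len(ages):.0f} days")
```
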
MM4. What methods/metrics do you use to evaluate the project status? (Comments from respondents.)

  1. “Too many.”
  2. “Bug Rate Trends and Test Case Coverage (Feature and Full Regression)”
  3. “Test case progress, defect status.”
  4. “Earned & Burned”
  5. “Number and severity of remaining open bugs. ‘Finger in the air’ (tester intuition that some areas need more testing).”
  6. “Executed test cases.”
  7. “Defect counts and severity along with test case completion.”
  8. “Bug/defect/issue metrics (total # open, # closed, # high priority, etc.)”
  9. “Test case progress, defect density, and requirement coverage.”
  10. “Requirement coverage and test completed.”
  11. “Bugs found vs. fixed.”
  12. “Test coverage and defect metrics.”
  13. “Defects open along with test case execution progress.”
  14. “None.”
  15. “Are all the features ‘tested’?”
  16. “Track change proposals and outcomes (P/F) for all changes by project, by application, by developer, and by week.”
  17. “Exit criteria are agreed upon upfront, then metrics report progress against those.” (See the sketch after this list.)
  18. “Test case complete %, pass/fail %, # of high-severity bugs.”
  19. “Schedule deviation, defects detected at each stage of project.”
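
Respondent 17's approach, agreeing exit criteria upfront and then reporting progress against them, deserves a concrete illustration. A minimal sketch, with entirely hypothetical criteria and numbers:

```python
# Hypothetical, pre-agreed exit criteria; each maps a name to a check
# over the current metrics. Thresholds here are invented for illustration.
exit_criteria = {
    "test case completion >= 98%": lambda m: m["executed"] / m["planned"] >= 0.98,
    "pass rate >= 95%":            lambda m: m["passed"] / m["executed"] >= 0.95,
    "no open high-severity bugs":  lambda m: m["open_high_sev"] == 0,
}

# Made-up current metrics for one release candidate.
metrics = {"planned": 400, "executed": 396, "passed": 381, "open_high_sev": 2}

for name, check in exit_criteria.items():
    print(f"{name}: {'MET' if check(metrics) else 'NOT MET'}")
```
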

MM5. Do you collect any metrics to evaluate the focus of the test effort, that is, to audit if you are running the right tests?

Response | Response percent | Response count
Yes | 56.9% | 37
No | 43.1% | 28

Result analysis: It is a step higher in responsibility and ownership when test teams evaluate their own work for effectiveness. Good work!
MM6. Do the metrics you use most help you:

Response | Response percent | Response count
Release better product | 31.30% | 20
Improve the development and test process | 26.60% | 17
Do more effective/efficient testing | 23.40% | 15
They do not help | 18.80% | 12

Result analysis: That over 80% of respondents use metrics to improve is great! More teams could be using metrics to point out problems, improve risk reporting, and give greater visibility into testing. Also, if your team is not capturing any measurements at all, it is safe to say your work is not respected, no one cares, or the team is purely schedule-driven regardless of what testing finds. I recommend you start a metrics program, if only to improve your own job skills!
MM7. Do you measure regression test effectiveness?

Response | Response percent | Response count
Yes | 59.1% | 39
No | 40.9% | 27

Result analysis: Regression test effectiveness is a growing issue in testing. Numerous teams have been doing large-scale test automation for many years now. Regression suites can become large, complex, or difficult to execute. Many of the regression tests may be old, out of date, or no longer effective. Yet teams are often afraid to reduce the number of regression tests.

At the same time, running very large regression test suites can take up too much bandwidth and impede a project. If you are having problems with your regression tests, start investigating their effectiveness.
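
One common way to start is a defect detection percentage for the suite: of the regression defects found in a cycle, how many did the suite catch versus how many escaped to later test phases or the field? A minimal sketch, with invented numbers:

```python
def regression_effectiveness(caught_by_suite: int, escaped: int) -> float:
    """Defect detection percentage for a regression suite: defects the
    suite caught, as a share of all regression defects in the cycle,
    including those that escaped to later phases or the field."""
    total = caught_by_suite + escaped
    return 100.0 * caught_by_suite / total if total else 0.0

# Invented numbers: the suite caught 34 regressions; 9 slipped past it.
print(f"Regression suite effectiveness: {regression_effectiveness(34, 9):.0f}%")
```
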

MM8. How much of an effort is made to trace requirements directly to your test cases and defects?

Response | Response percent | Response count
The test team enters requirements into a tool and traces test cases and bugs to them | 41.50% | 27
We write test cases and hope they cover requirements; there is no easy, effective way to measure requirements coverage | 18.50% | 12
The product/marketing team enters all requirements into a tool; test cases and bugs are traced back to the requirements; coverage is regularly reported | 15.40% | 10
We do not attempt to measure test coverage against anything | 15.40% | 10
We trace test cases in a methodical, measurable, effective way against code (components, modules, or functions) | 9.20% | 6

Result analysis: Tracing requirements to a test case does not guarantee a good product. Measuring requirements coverage has become an obsession of some teams.

The big issue for teams doing this is that a test case can only be as good as the requirement it traces to. Gaps in the requirements, or incomplete or even bad requirements, will all but assure a problem product. Tracing test cases to problematic requirements does no one any good.

This practice can be genuinely useful for measuring requirements churn and defect density, or for supporting root cause analysis. Tracing or mapping requirements to tests can be a good method of assessing the relevance of your tests.
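
To make the mapping idea concrete, a requirements-to-test-case trace makes both coverage and gaps computable. A minimal sketch, with hypothetical requirement and test case IDs:

```python
# Hypothetical traceability map: requirement ID -> test cases covering it.
traceability = {
    "REQ-001": ["TC-01", "TC-02"],
    "REQ-002": ["TC-03"],
    "REQ-003": [],            # a coverage gap the metric should surface
    "REQ-004": ["TC-04", "TC-05", "TC-06"],
}

covered = [req for req, tests in traceability.items() if tests]
uncovered = [req for req, tests in traceability.items() if not tests]

print(f"Requirements coverage: {100 * len(covered) / len(traceability):.0f}%")
print(f"Untraced requirements: {uncovered}")
```
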
MM9. If code coverage tools are used on your product, what do they measure?

Response | Response percent | Response count
Code-level coverage: lines of code, statements, branches, methods, etc. | 55.3% | 21
Effectiveness of test cases (test cases mapped to chunks/blocks/lines of code) | 44.7% | 17
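
The two options point in different directions: the first measures how much of the code the tests touch; the second maps each test case to the code it exercises, which helps judge which tests earn their keep. For the first, statement coverage is simple arithmetic, executed statements over total statements; in practice a tool such as gcov (C/C++) or coverage.py (Python) supplies the counts. A minimal sketch with illustrative numbers:

```python
def statement_coverage(executed: int, total: int) -> float:
    """Statement (line) coverage: executed statements over total statements."""
    return 100.0 * executed / total

# Illustrative counts: 1,840 of 2,300 statements executed by the test run.
print(f"Statement coverage: {statement_coverage(1840, 2300):.1f}%")
```
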

MM10. How do you measure and report coverage to the team?

Response | Response percent | Response count
Test plan/test case coverage | 45.30% | 29
Requirements coverage | 25.00% | 16
We do not measure test coverage | 15.60% | 10
Code coverage | 7.80% | 5
Platform/environment coverage | 6.30% | 4
Data coverage | 0.00% | 0

Result analysis: Coverage, however you define it, is crucial to report to the team. It is the communication of where we are testing, and it is the crux of the discussion of what constitutes enough testing.
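
However your team defines coverage, the roll-up the project team sees can be one short report. A minimal sketch across a few of the dimensions above, with entirely made-up numbers:

```python
# Hypothetical roll-up across several of the coverage dimensions from MM10.
# Each entry is (covered, total); all numbers are invented for illustration.
coverage = {
    "Test plan/test case":  (188, 210),  # test cases executed vs. planned
    "Requirements":         (54, 60),    # requirements with at least one test
    "Platform/environment": (5, 8),      # OS/browser combinations exercised
}

for dimension, (covered, total) in coverage.items():
    print(f"{dimension} coverage: {100 * covered / total:.0f}% ({covered}/{total})")
```
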

Michael Hackett
Michael is a co-founder of LogiGear Corporation and has over two decades of experience in software engineering in banking, securities, healthcare, and consumer electronics. Michael is a Certified Scrum Master and has co-authored two books on software testing: Testing Applications on the Web: Test Planning for Mobile and Internet-Based Systems (Wiley, 2nd ed., 2003) and Global Software Test Automation (Happy About Publishing, 2006). He is a founding member of the Board of Advisors at the University of California Berkeley Extension and has taught for the Certificate in Software Quality Engineering and Management at the University of California Santa Cruz Extension. A member of IEEE, Michael has brought Silicon Valley testing expertise to over 16 countries through his training courses. He holds a Bachelor of Science in Engineering from Carnegie Mellon University.
