2010 – 2011 LogiGear Global Testing Survey Results – Test Methods

METHODS

M1. The test cases for your effort are based primarily on:

Response percent Response count
Requirements documents 61.3% 46
Discussions with users on expected use 2.7% 2
Discussions with product, business analysts, and marketing representatives 9.3% 7
Technical documents 4% 3
Discussions with developers 8% 6
My experience and subject or technical expertise 12% 9
No pre-writing of test cases, I test against how the application is built once I see it 2.7% 2
Guess work 0% 0

Result analysis: This confirms conventional wisdom. Over 60% of respondents list requirements documents as the basis of their test cases. This raises two important discussion points: 1) test cases can only be as good or as thorough as the requirements, and 2) past survey results show that most testers are hired for their subject matter expertise, yet that expertise is the primary basis of test case development for only a far-distant 12%.

Given how regularly some test teams complain about the requirements documents they receive, I would have assumed that reliance on subject matter expertise would be a more common basis for test case development.


M2. Are you executing any application-level security and/or usability testing?

Response percent Response count
Yes for both 42.1% 32
Yes for usability 23.7% 18
No for both 23.7% 18
Yes for security 10.5% 8

 

M3. Are you conducting any performance and scalability testing?

Response percent Response count
Yes 74% 54
No 26% 19

 

M4. Does your group normally test for memory leaks?

Response percent Response count
No 52.1% 38
Yes 47.9% 35

 

M5. Does your group normally test for buffer overflows?

Response percent Response count
No 56% 42
Yes 44% 33

 

M6. Does your group normally test for multi-threading issues?

Response percent Response count
No 50.7% 38
Yes 49.3% 37


M7. Do you do any API testing?

Response percent Response count
Yes 62.7% 47
No 37.3% 28

Result analysis M2-M7: As test engineers, you need to know what types of testing should be executed against your system even if you are not the person or team executing them. For example, performance tests are often run by a team separate from the one executing functional tests; even so, your knowledge of test design and data design will help those executing these tests.

Knowledge of other types of testing can also help cut redundancy and shorten a test cycle. Most importantly, it becomes a serious problem if you are not executing these tests because you don't know how, or because you hope someone else will take them over. Skipping memory tests because you think or hope the developers are running them is also a mistake.
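
Testers do not necessarily need specialized tooling to run a first-pass memory check. As a rough illustration (not something from the survey), here is a minimal sketch in Python using the standard tracemalloc module; process_order is a hypothetical operation standing in for whatever your application repeats, and the growth threshold is an assumption to tune per application.

# Minimal memory-growth check (illustrative sketch only).
# "process_order" is a hypothetical operation under test; substitute your own.
import tracemalloc

def process_order(order_id):
    return {"id": order_id, "status": "processed"}

def test_no_unbounded_memory_growth():
    tracemalloc.start()
    # Warm up so one-time caches and imports are not counted as leaks.
    for i in range(1000):
        process_order(i)
    baseline, _ = tracemalloc.get_traced_memory()

    # Repeat the operation many more times; a leak shows up as steady growth.
    for i in range(10000):
        process_order(i)
    current, _ = tracemalloc.get_traced_memory()
    tracemalloc.stop()

    # Allow some slack; the ~1 MB threshold is an assumed, tunable limit.
    assert current - baseline < 1_000_000, "memory grew by more than ~1 MB"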

API testing can find bugs earlier in the development cycle and makes defect isolation easier. If API testing is not being practiced, you should have a good reason.
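
To make that point concrete, here is a minimal sketch of an API-level check using Python's requests library with pytest. The base URL, endpoint, and response fields are hypothetical; the idea is that a failure here points directly at the service, with no UI layer to rule out.

# Illustrative API-level test (hypothetical endpoint and fields).
# Requires: pip install requests pytest
import requests

BASE_URL = "http://localhost:8080/api"  # assumed test environment

def test_get_account_returns_expected_fields():
    response = requests.get(f"{BASE_URL}/accounts/42", timeout=5)
    assert response.status_code == 200

    body = response.json()
    # A defect caught here is isolated to the service, not the UI.
    assert body["id"] == 42
    assert "balance" in body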


M8. What percentage of testing is exploratory/ad hoc and not documented?

Response percent Response count
10 – 33% (important part of our strategy) 46.7% 35
Less than 10% (very little) 26.7% 20
33 – 66% (very important, approximately half, more than half) 13.3% 10
66% – 99% (our most important test method) 10.7% 8
0% (none) 2.7% 2
100% (all our testing is exploratory) 0% 0

Result analysis: With almost half responding that exploratory testing makes up 10-33% of their effort and is an important part of their strategy, my biggest concern is whether the project team knows and understands your use of exploratory testing and the difficulty of measuring it.

The high percentage calling it important, combined with the difficulty of measuring exploratory testing, often leads to test projects being sized incorrectly, or to increased risk when the schedule time allotted for exploratory testing is cut.

I expected a bigger reliance on exploratory testing, yet more than a quarter responded "less than 10%." Most test teams say they find most of their bugs running exploratory tests rather than validation test cases. This may still be true, but many test teams may lack the time to do exploratory tests.


M9. How effective is your exploratory/ad hoc testing?

Response percent Response count
Somewhat effective, it is useful 54.8% 40
Very effective, it is our best bug finding method 39.7% 29
Not effective, it is a waste of time/money testing 5.5% 4

Result analysis: Almost 40% said their exploratory/ad hoc testing is very effective and their best bug-finding method. I agree, and I hope you communicate to your teams how effective it is and how much you rely on it.


M10. How is exploratory/ad hoc testing viewed by project management?

Response percent Response count
Somewhat effective, it is useful 58.6% 41
Essential, necessary for a quality release 20% 14
Very effective, it is our best bug finding method 11.4% 8
Not effective, it is a waste of time/money testing 10% 7

Result analysis: Close to 60% of respondents say management views the strategy as somewhat effective. In the previous question, nearly the same percentage of testers saw it as useful. This surprises me: very often testers see ad hoc testing as more valuable than management does, since management often views it as immeasurable and unmanageable.


M11. What is the goal of testing (test strategy) for your product?

Response percent Response count
Validate requirements 34.20% 25
Find bugs 26% 19
Improve customer satisfaction 23.30% 17
Cut customer support costs/help desk calls 8.20% 6
Maximize code level coverage 5.50% 4
Improve usability 1.40% 1
Improve performance 1.40% 1
Comply with regulations 0% 0
Test component interoperability 0% 0

Result analysis: The number one answer for the goal of testing is validating requirements. This is a surprise. Typically, testers see finding bugs as the goal, while management, business analysts, or marketing see validating requirements as the main goal and job of test teams.

Even so, about half the respondents see finding bugs and improving customer satisfaction as the goal of testing. We see several times in this survey a heavy reliance on requirements as the basis of testing. This can be a problem with anything less than great requirements!

M12. Which test methods and test design techniques does your group use in developing test cases? (You can choose more than one answer.)

Response percent Response count
Requirements-based testing 93.20% 69
Regression testing 78.40% 58
Exploratory/ad hoc testing 68.90% 51
Scenario-based testing 56.80% 42
Data driven testing 40.50% 30
Equivalent class partitioning and boundary value analysis 27% 20
Forced error testing 25.70% 19
Keyword/Action-based testing 17.60% 13
Path testing 16.20% 12
Cause-effect graphing 12.20% 9
Model-based testing 10.80% 8
Attack-based testing 9.50% 7
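
As one illustration of the design techniques listed above, equivalence class partitioning and boundary value analysis reduce an input's range to a handful of representative and edge values. The sketch below is hypothetical: it assumes a validate_age function whose business rule is that valid ages run from 18 through 65 inclusive.

# Equivalence partitioning / boundary value analysis sketch
# (hypothetical rule: valid ages are 18 through 65 inclusive).
import pytest

def validate_age(age):
    return 18 <= age <= 65

# One representative per equivalence class plus the values at each boundary.
@pytest.mark.parametrize("age, expected", [
    (17, False),  # just below lower boundary
    (18, True),   # lower boundary
    (40, True),   # representative of the valid partition
    (65, True),   # upper boundary
    (66, False),  # just above upper boundary
])
def test_validate_age(age, expected):
    assert validate_age(age) == expected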

M13. How do you do regression testing?

Response percent Response count
Both 47.30% 35
Manual 33.80% 25
Automated 14.90% 11

Result analysis: For about a third of respondents (33.8%), regression testing is purely manual. I see this commonly in development teams. There are so many good test automation tools on the market today, usable on more platforms than ever, that teams not automating their regression tests ought to re-examine test automation. For all testers, test automation is a core job skill.
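
For teams weighing automation, a minimal sketch of an automated UI regression check is shown below, using Selenium WebDriver in Python. The URL, element IDs, and credentials are hypothetical; the point is that once scripted, this check can run against every build instead of being repeated by hand.

# Minimal automated regression check (illustrative; URL and IDs are hypothetical).
# Requires: pip install selenium (plus a matching browser driver).
from selenium import webdriver
from selenium.webdriver.common.by import By

def test_login_regression():
    driver = webdriver.Chrome()
    try:
        driver.get("http://localhost:8080/login")
        driver.find_element(By.ID, "username").send_keys("demo_user")
        driver.find_element(By.ID, "password").send_keys("demo_password")
        driver.find_element(By.ID, "login-button").click()
        # The check that used to be a manual eyeball verification.
        assert "Dashboard" in driver.title
    finally:
        driver.quit()
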
M14. Is your test environment maintained and controlled so that it contributes to your test effort?

Response percent Response count
Yes 83.1% 49
No 16.9% 10

M15. Are there testing or quality problems related to the environments with which you test?

Response percent Response count
Yes 64.4% 47
No 35.6% 26

M16. Are there testing or quality problems related to the data with which you test?

Response percent Response count
Yes 63% 46
No 37% 27

Result analysis M14-M16: Controlling test environments and test data is essential. Environments and data that cause testing problems are very common and very problematic! Building and maintaining great test environments and test data takes time and investment.

These answers confirm what I see in companies regularly: problems with environments and data not getting resolved. With some time and perseverance, fixing these would greatly improve the effectiveness of testing and trust in the test effort.

Michael Hackett
Michael is a co-founder of LogiGear Corporation, and has over two decades of experience in software engineering in banking, securities, healthcare and consumer electronics. Michael is a Certified Scrum Master and has co-authored two books on software testing: Testing Applications on the Web: Test Planning for Mobile and Internet-Based Systems (Wiley, 2nd ed. 2003) and Global Software Test Automation (Happy About Publishing, 2006). He is a founding member of the Board of Advisors at the University of California Berkeley Extension and has taught for the Certificate in Software Quality Engineering and Management at the University of California Santa Cruz Extension. As a member of IEEE, his training courses have brought Silicon Valley testing expertise to over 16 countries. Michael holds a Bachelor of Science in Engineering from Carnegie Mellon University.
