2010 – 2011 LogiGear Global Testing Survey Results – the Politics of Testing

Few people like to admit that team dynamics and project politics will interfere with the successful completion of a software development project. But the more projects you work on, the more you realize it’s very rare that technology problems get in the way. It’s always the people, planning, respect, and communication issues that hurt development teams the most.

Historically, test teams bore the brunt of ill will on projects in traditional development models: they go last! Comments about test teams include, “They are slowing us down,” and “They are preventing us from hitting our dates.” And when the test team misses a seemingly obvious bug, their entire work becomes suspect.

Even in Agile, issues around being “cross-functional,” inclusion in the estimation process, and extremely lean [read: none] documentation can perpetuate political problems for people who test.

It is no accident that the first value of the Agile Manifesto is “Individuals and interactions over processes and tools.” People, and how we get along with each other, consistently have a larger impact on project success than any process or tool. Communication problems are the most commonly cited problems in software development.

I would hope we have moved away from the worst work environments, since we have so much information and experience showing that political and difficult work situations badly impact productivity and work quality.

As I reviewed the results of this survey, it became clear we still have far to go to fix team political problems. A better understanding and awareness of common political problems will help you recognize them more quickly and give you an opportunity to remedy situations before they get worse.

Duplicated responses were deleted.


1. What is the biggest pain area in your job?
· “Poorly Written or insufficient requirements leading to scope creep.”
· “Delayed release to QA creates immense testing pressure in terms of meeting deadline.”
· “Not enough resources for development or testing.”
· “Timetables to testing.”
· “Thinking alone with no help.”
· “Making teams understand the importance of QA participation early in the life cycle.”
· “Time pressures and lack of understanding in regards to testing by external parties.”
· “Estimating time to automate on clients’ perceptions.”
· “Educating people on the value of testing, tester skills, domain knowledge – for example, helping managers understand that quantifiable measures related to testing are generally weak and sometimes risky measures, especially when used independently (like number of bugs, or number of tests run/passed/failed, etc.). Demonstrating how to use qualitative measures more effectively and advise managers/stakeholders on trusting the intuitive assessments of senior testers to make decisions.”
· “Keeping track of schedules and keeping test cases up to date.”
· “Complete lack of any process at all – all projects done uniquely = chaos.”
· “Domain knowledge”
· “Production and testing environments are not 100% synced.”
· “Dealing with some executives who do not understand a thing about testing.”
· “Having other team members value my opinion when it comes to process improvement, as well as scheduling for releases correctly – giving unrealistic expectations and time frames.”
· “Explaining the GAP between theory and practical.”
· “Testing rare scenarios.”
· “Developer says, ‘This is not reproducible on my machine’ and marks the status back to ‘Can’t Reproduce.’ I call him back and say, ‘It’s 100% reproducible on my machine. Come here, look, and fix the problem.’ Sometimes I have to reproduce the issue many times for better debugging, and this eats up a lot of time.”
· “Managing resources and data for testing. Such as Database servers, Different environments, etc.”
· “Under-funding for new projects.”
· “People’s unwillingness to choose testing as a career path.”
· “No clear career path.”
· “I don’t have access to the code base.”
· “Creation of reports.”
· “Balancing on-the-job learning with providing results.”
· “Over reliance on testing quality into a product, application or service.”
· “The need to repeatedly prove that independent testing adds value.”
· “Writing good test cases and having to retest the application due to late changes in requirements.”
· “Developers are not doing proper unit testing and give QA the build late for testing. This results in the QA team staying late.”
· “Not being trusted to know what I’m doing.”
· “Delivery dates, testing work is not measured or estimated”
· “Deliver documentation from regulatory and quality assurance perspective and not tester perspective.”
· “Not giving much time to do regression testing and also not giving time to developers to fix the defects.”
· “HR issues with certain personnel.”
· “Finding good pragmatic QA lead, who understands solutions architecture, solutions design and the value solutions bring to customers.”
· “Deadlines driving releases, not schedules or quality.”
· “Having to do the same things day in day out.”
· “Getting the project organized before code is produced and sent for testing.”
· “Switching in multiple projects daily.”
· “Changing requirements, unavailability of business experts to inform requirements gathering process.”
· “Designing the effective testcase”
· “Repeated releases due to very small changes in requirements.”
· “Receiving timely information.”

Analysis: The universality of problems in software development continues to astound me. On one hand, most people think their team and company problems are unique; they rarely are. On the other hand, the problems are so universal and well known that I have to ask: why haven’t organizations moved to fix such common issues that would have a big impact on productivity and quality?

Also, testing and QA political problems are sometimes confined to testing, but much more often they are the result of a problematic corporate culture.

2. Does your development team value your work?

Response percent Response count
Yes 57.9% 73
Somewhat 38.1% 48
No 4% 5

Analysis: Most teams are valued, but the 58% “Yes” response is actually low; the number should be closer to 100%!

3. Who is ultimately accountable for the high or poor quality of the product (QA: quality assurance/guaranteeing quality)?

Everyone 53.8% 64
Project manager 16% 19
Testers 14.3% 17
Product team (marketing, business analyst) 7.6% 9
Upper corporate management 5% 6
Developers 3.4% 4

Analysis: The only correct answer here is “Everyone.” I often ask this question of clients, and the “Everyone” percentage is usually much higher. Everyone on the team makes unique and important contributions to the level of quality. Any weak link will break the chain. We all share responsibility.

4. Who enforces unit testing?

Development Manager 37.8% 45
Developers 32.8% 39
Testers 16% 19
Project Manager 10.9% 13
Product Management 2.5% 3

Analysis: Over 70% responding that the Development Manager or Developers enforce unit testing is the correct answer. The 16% responding Testers is wrong.
Only in the past few years, as Agile (particularly XP practices such as TDD) has spread, have unit testing practices finally become more common.

5. What is the most important criterion used to evaluate whether the product is shippable?

Risk assessment 43.7% 52
Completion of test plan/sets of test cases 22.7% 27
Date 17.6% 21
Bug counts 14.3% 17
Code turmoil metrics 1.7% 2

Analysis: Interesting to see how the ship or release criteria differs greatly from company to company.

6. What is your perception of the overall quality of the product you release?

Good 58.1% 68
Average 21.4% 25
Excellent 17.1% 20
Poor 3.4% 4
Bad 0% 0

7. What pain does your product team have regarding quality? (You can select multiple answers.)

Insufficient schedule time 63.8% 74
Lack of upstream quality assurance (requirements analysis and review, code inspection and review, unit testing, code-level coverage analysis) 47.4% 55
Feature creep 38.8% 45
Lack of effective or successful automation 36.2% 42
Poor project planning and management 32.8% 38
Project politics 31% 36
Poor project communication 26.7% 31
Inadequate test support tools 25.9% 30
No effective development process (SDLC) with enforced milestones (entrance/exit criteria) 25.9% 30
Missing team skills 23.3% 27
Low morale 15.5% 18
Poor development platform 7.8% 9

Analysis: These responses speak for themselves. Insufficient schedule time for testing, lack of upstream quality practices, feature creep, lack of effective automation: anyone familiar with software development could have predicted this. A larger problem is that we have had these same problems since 1985!

8. What is the most important value of quality?

Ease customer pain/deliver customer needs, customer satisfaction 52.1% 61
Better product (better usability, performance, fewer important bugs released, lower support, etc.) 40.2% 47
Save money during development (cut development and testing time to release faster) 7.7% 9

9. Who ultimately decides when there is enough testing?

Test team 35.6% 42
Project manager 31.4% 37
Upper management 14.4% 17
Product (marketing, product, customer, biz analysts) 14.4% 17
Developers 4.2% 5

Analysis: A surprising answer here. Remember, this survey is made up primarily of testers. I would have expected Product or Project Management to be the number one answer.

10. How often does your testing schedule not happen according to plan?

Some/few 38.1% 45
Often 26.3% 31
Most 25.4% 30
Every project 6.8% 8
Never 3.4% 4

Analysis: Clearly, more than half of projects do not happen according to plan: over 58% responded that “Every,” “Most,” or “Often” projects do not happen as planned. Have project planners not learned much in the past 20 years?

11. What are the main reasons test schedules do not happen according to plan?
· “Dev issues or dependencies on hardware.”
· “Scope Creep”
· “Requirements changes, greater than average defect count.”
· “Change of release content.”
· “Resource conflicts – our testing resources are not dedicated to testing, it is a function they perform along with their normal job functions.”
· “The deadlines for the delivery of the projects are short.”
· “Unstable or late deliverables, changing requirements or scope.”
· “Usually dev slip the date or too many problems (bugs).”
· “Emergency releases”
· “Unexpected events, additional workload, new directives, rework …”
· “Requirement changes delayed development and subsequently delayed testing. Since there will be no change in the production release date, the days for QA are shortened.”
· “Inconsistency in development process.”
· “Delay in releases from development team; scope creep during testing phase; rework of fixes due to lack of unit testing and poor quality of the component released to QC.”
· “Lack of quality Development and so the repetitive dev cycle.”
· “New Feature requests injected in between a sprint.”
· “Major bugs that prevents testing of some areas.”
· “Lack of understanding of new product/enhancements.”
· “Other, equally high-priority deliverables (in products/features in different ship vehicles).”
· “Unrealistic and not well-vetted project plans, schedules and estimates.”
· “Lack of unit testing by the dev team, due to which, when the build is given to the testing team, the test team gets blockers, etc. This increases test cycles.”
· “Never actually included in scheduling. Developers decide what goes into the next release based on development effort. Testing is an afterthought.”
· “Unexpected user issues that come up; problems with software that our product integrates with.”
· “Late turnover.”
· “Environment stability.”
· “Analysis/Code not complete on time for test to begin.”
· “Multiple developers putting in changes right before deadline.”
· “Multiple projects at one time.”
· “Lack of funds.”
· “Issues with product installation, server problems, unrealistic testing schedules assigned by upper management.”

Analysis: Scope creep is by far the number one answer (I removed duplicate answers). It goes by many names, all meaning the addition of features after schedules have been agreed upon. Hopefully, as Agile, and particularly Scrum practices such as the Planning Game, Sprint Planning, and a strong Product Owner, become more common, scope creep will one day be just a bad memory.

12. How do you decide what to test or drop (prioritize coverage) if your testing schedule/resource is cut in half?
· “The CEO.”
· “What feature is critical and test positive cases not negative.”
· “Project Team Decision”
· “Focus on high risk areas that would cause customer dissatisfaction.”
· “Products’ basic functionality
Severity of corrections
Areas affected by corrections
Importance to customers
Importance to sales and marketing
Ability to simulate in test lab”
· “We don’t drop testing, we delay deployment of features if they cannot be tested.”
· “What code has been changed the most.”
· “Negotiate with developers and Project Management.”
· “Core test case coverage.”
· “Based on the historic data and feature change: what has been tested in previous releases and what new features has been introduced/changed.”
· “Depends on the requirements prioritization and their impact on the software functionality.”
· “Speak with the business to prioritize the most important/critical items to receive testing and/or personal experience of where the pain points are going to be (where do we often see the most issues with previous testing?).”
· “Project management and application owner.”
· “Knowledge of customer usage, therefore impact.”
· “Technology and business risk assessment. Mostly from the QA/Test team. Not ideal but the best out of a challenging situation.”
· “Combination of time required, business criticality and business priority.”
· “Discuss the Functionalities with Business Analyst/Product Manager and decide priorities.”
· “I think about how the customers are going to use the product. Anything which will affect all customers is the highest priority. Anything affecting individual customers is lower priority. Anything which requires a reset to the server (affects all customers) has a higher priority. Anything with a work around is a lower priority. The more important the customer, the more important their user stories.”
· “Items that need to be delivered for launch will be tested.”
· “These decisions are made project by project. We use customer feedback and knowledge of which features are most used by customers to decide.”
· “Focus on main functionality; leave out additional features that we don’t have time to test.”
· “Safety critical vs. non-safety critical.”
· “Risk Based Testing, risk categorization.”
· “Scrum planning.”
· “From bug history, new feature, customer usage.”
· “ROI”
· “Any test cases with dependencies to new feature are prioritized.”
· “Drop multiple passes on relatively simple changes; drop regression testing for portions of system that scheduled changes should not impact.”
· “Test main functionality and skip regression.”
· “Test cases are ranked H-M-L and we focus on the High. Usually the most desired feature has the most High ranks. Also, it may be decided to focus on only one feature and defer release of other features. The business ultimately makes that decision of focus.”
· “Test features that are crucial for clients acceptance, and / or features which are complex and broad within the solution.”
· “Operational requirements.”
· “We usually don’t. We will throw more resources on a project.”

13. Is testing (the test effort, test strategy, test cases) understood by your project manager, product manager/marketing, and developers?

Yes 75.7% 87
No 24.3% 28

Analysis: I often ask this question during my consulting work and it is always a very similar answer: testing is not well understood by a significant percentage of people in the development team.


1. How would you characterize the relationship between your team and the development teams?

Good 48.6% 53
Leading to better quality releases 24.8% 27
Adversarial 11% 12
Excellent 10.1% 11
Poor 2.8% 3
Hurts/reduces product quality 2.8% 3

Analysis: A positive response. It is very good that over 83% have positive team relationships; too bad over 16% of teams have poor, hurtful, or adversarial relations with other groups.

2. When and how are testing groups involved with the development team?

In adequate time to prepare a good test project 56.4% 44
Late 28.2% 22
Too late 9% 7
Too early 6.4% 5

3. Do you frequently get reviews and feedback of your work (e.g., test plans, test cases, test reports, bug reports, etc.) from the development team?

No 53.7% 36
Yes 46.3% 31

Analysis: More than half of the responding teams do not get regular feedback from their development teams. Not good. In Agile, an immediate and effective retrospective should give every team member feedback and suggestions from fellow team members on how to do more productive work.

4. How is morale on the test team?

Good 38.9% 42
Up and down 38% 41
Happy, excellent 11.1% 12
Low 5.6% 6
Stressed 4.6% 5
Poor 1.9% 2

5. Does your group have adequate resource and time to do its job?

Yes 44.9% 48
No 55.1% 59

Analysis: Another disappointingly high percentage, and yet another issue we have seen since 1985. Perhaps your team will never be adequately staffed; if so, you need to do a better job at coverage analysis and risk communication.

6. Who makes staffing decisions on the need for additional testing resources?

Product/ Project 70.1% 75
Test Team 29.9% 32

7. How do you think other groups would characterize the competency of your group?

Highly competent 46.8% 51
Adequately qualified 34.9% 38
Not competent enough 10.1% 11
Excellently qualified 8.3% 9
Unqualified 0% 0

Analysis: The results of this question are always interesting to me. Approximately 90% of respondents believe they are characterized as adequately, highly, or excellently qualified/competent. That is a great number and shows a maturation of our industry. In the past, test teams were often viewed as the least technically competent and least trained on the product team. Over the past two decades, test and QA classes have become more common, and people hired into the industry have increasingly been chosen for subject matter expertise, technical expertise, or QA and test expertise, leading to a much more qualified testing staff.

8. What would you like to change or improve concerning your team and team dynamics if you could (list as many as 3 items)?
· “More resources for the long term, not contract.”
· “Take the time to study the project”
· “Slow projects down, we often jump into ‘Small’ changes without sufficient review.”
· “Higher synchronization between test teams”
· “Disciplined and planned approach to optimize testing and improve productivity.”
· “Increase our team testers.
Increase the time for tests.
Give more training time to testers.”
· “More local testers.”
· “Much earlier involvement in the design process, much less ‘test this now because it’s going out to the field next week.’”
· “Cross functional and domain expertise.”
· “Add resource, training.”
· “Above answers would vary from project to project, but common issues I deal with: 1) Lack of understanding of what testing can provide – incorrect expectations; 2) Lack of accountability for quality across project teams – making QA/test teams some kind of inadequate gatekeeper; 3) Improper use of test metrics and data”
· “1) Improve technical testing skills among the team; 2) Have project mgmt and the dev team listen to the quality team in regards to quality; 3) Implement automation, performance and security testing in terms of skills and tools.”
· “Better peer reviews, more motivation.”
· “Involve testers sooner in regards to requirements; more collaboration in general.”
· “Be more proactive in finding defects upstream.”
· “More communications, care about personal develop space, good environment”
· “Team interaction, team awareness on latest techniques and Open culture.”
· “Access to better testing support tools.”
· “More resources for development to be able to provide better quality code; more time for test automation to be written and implemented.”
· “1. Make them think out of the box
2. Come up with new ideas to improve efficiency
3. Better communication with the developers”
· “Follow Agile methodologies and Work from the very first stage of Project.”
· “1. Stop changing requirements
2. Stop injecting feature requests in between a sprint.”
· “More freedom for testers
Proper communication about updates and latest
Soft skills training”
· “There is more to testing a system than validating that the GUI works. We need to test data processes and web and messaging services. Different kinds of testing require different skills.”
· “1. Technical abilities
2. Testing knowledge
3. Professionalism (tester’s attitude)”
· “Appreciation of work.”
· “More actively collaborative work.”
· “Change the culture from ‘test quality in’ to ‘build quality in.’ Not just words but real practices and disciplines designed to build it right the first time with lean practices.”
· “The attitude towards testing
Knowledge of testing technologies
Domain Knowledge”
· “Better cooperation with Operations (aka OPS Run)
Better testing for MPI models
More involvement with contracted projects”
· “Improve tester skills (test techniques, product knowledge); higher motivation.”
· “Respect a person’s work and give space to testing team.”
· “Have more employees in the role of QA leads, not consultants.”
· “One more engineer level QA person and one more tech level QA person.”
· “Centralize all QA under one umbrella organization
Independent budget for QA organization
Operate on a shared services model
Partnership with tool vendors”
· “More structured test cycles; ability to stop releases if medium to high priority defects are not fixed; more training.”
· “There could be two of me.”
· “Understand customer better”
· “Team member trainings, More coordinated work and Openness in culture.”
· “1 – Senior management support
2 -Senior management support
3 – Senior management support”
· “Open Communication
Schedule Breathing Space
Strengthen Business Relationship”
· “The perception that QA doesn’t know what they’re doing because we find too many defects that delay a project; the perception that QA is a roadblock to be overcome or circumvented rather than a key element of any project; the perception that QA finding a defect is a NEGATIVE against a project.”
· “Our image to the company; more time to test and implement better testing strategies.”
· “Would like team members to actively educate themselves on the base product functionality and cross train other members.”
· “QA needs to be a more integral role throughout the project Lifecycle.”
· “More personnel in the testing team
More automation and testing tools
Fully support the SDLC and testing of all”
· “More employees
More time to test
Better tools”
· “Nearly everything.”
· “Need more resources.”
· “QA as a viable authority to allow for change or work in the queue (third leg of a stool with Dev, Business).”
· “Clearer roles and responsibilities.
QA work more with business for UAT.
Configuration manager needed.”
· “Better skillset.
More time for learning new skills.
Increased focus on QA as a craft that can be learned and developed.”
· “Get marketing and management to guard against scope creep. Make better schedules so that every trip to the restroom doesn’t affect it.”

8. Is the (office) working environment conducive to your productivity?

Yes 84.8% 89
No 15.2% 16

9. If you have other roles on the project, what are they? (You may select multiple answers.)

Test Only 44.1% 45
Manage project 42.2% 43
Write requirements 32.4% 33
Design/code test tools and harnesses for internal use 27.5% 28
Specify design, UI or user workflow 20.6% 21
Develop code for customers 6.9% 7

Analysis: Very interesting results on how the field of testing has expanded and what is now included in a tester’s job. For Agile projects, these people are already “cross-functional!”

10. What is the PRIMARY method of regular team communication?

Email 68.4% 52
Yelling around the office 10.5% 8
IM 9.2% 7
Wiki 9.2% 7
Sharepoint 2.6% 2
Blog 0% 0

Analysis: It is great that problematic IM is not the primary tool. However, email still being the primary method for over two-thirds of respondents is problematic. Email has many, many problems for project communication and management; wikis and “project pages” are much more effective.

11. What ALM/Team/Offshore communication tool do you use?

We built our own 25.6% 20
HP Quality Center 23.1% 18
Rational Suite (ReqPro-Rose-ClearCase…ClearQuest or combination) 5.1% 4
Jazz- Team Concert 2.6% 2
SourceForge 2.6% 2
CodeGear 1.3% 1
ScrumWorks 1.3% 1
Rally 1.3% 1
CollabNet 1.3% 1
MKS 0% 0

Other (please specify) 20
1. “Share Point”
2. “Testlink”
3. “JIRA”
4. “Bugzilla”
5. “email”
6. “Microsoft Communicator”
7. “jabber”
8. “Perforce”
9. “rally & Quality center”
10. “Drupal”
11. “Skype”
12. “Jira”
13. “Digite”
14. “Combination SCRUM and ActiveCollab”

12. What is the biggest project driver?

Schedule 56.2% 59
Cost/resources 25.7% 27
Risk/quality 18.1% 19

Analysis: Schedule remains, by far, the main project driver. We know from the questions above that projects rarely follow the planned schedule. When test times get compressed, test teams must have great coverage, risk analysis and reporting mechanisms.

13. Do you believe that there are product quality problems?

Yes 77.4% 82
No 22.6% 24

Analysis: This question is often a reality check on opinions of your product. In my consulting work, I have found most teams believe they release product with quality problems.

14. What is the biggest single quality cost on your project?

Testing by test team 24.1% 19
Support (patches, bug fixes, documentation and phone/help desk) 17.7% 14
Don’t know 16.5% 13
Building and maintaining effective test data 10.1% 8
Building and maintaining test environments 10.1% 8
Test automation (tool, writing and maintaining scripts) 10.1% 8
Requirements review, design review, requirements analysis 6.3% 5
Code walkthrough, inspection, review 3.8% 3
Other 1.3% 1

Analysis: Responses to this question are always interesting to me— it is a problem that 16% of test/quality respondents do not know or understand the cost of various quality activities.

I am very glad to see certain groups recognizing Support as the biggest quality cost on their products. Quality cost analysis must include post-release quality cost. The test strategy must consider reducing support costs.

Note: The next two questions concern regulatory compliance. I have read estimates that half of all software written is regulated. Regulatory compliance necessitates test strategies and documentation to pass audits.

15. Does your team directly have regulatory compliance (are you subject to external audit)?

No 65.8% 48
Yes 34.2% 25

16. If the answer to the previous question is yes, you could get externally audited for which type of compliance?

SOX 52.4% 11
SEC 19% 4
FDA 14.3% 3
FDIC 9.5% 2
DOT 4.8% 1
DOD 0% 0

17. If you could do anything to release higher quality product or service, what would that be?
· “More smart testing”
· “Spend more upfront time gathering client input into requirements. Stop rushing projects into development without due diligence on features and impacts.”
· “Implement an ALM tool (like Rational tools).”
· “Improved Project Management”
· “Improve the time to testing the product.”
· “Start testing earlier.”
· “Continuous improvement.”
· “More unit testing.”
· “Structured communication on new features (release notes would be nice!).”
· “Extract accurate quality criteria and measures from key project stakeholders and customers (if applicable).”
· “Hire skillful team
Buy testing tools
Improve SDLC”
· “Solidify and document requirements prior to design.”
· “More time to plan more thorough testing.”
· “Improve process.”
· “More automation testing.”
· “Proper and solid test case design and introduce a proper test process”
· “Interact with the product management and clients more”
· “Rebuild the software development process, add a measurement system, and buy adequate tools for management and control of the software process.”
· “1.Enforce the defined processes
2.Automate testing for regression”
· “Better Test Strategy. Including TDD”
· “More data testing”
· “Invest into training of testers.”
· “Make sure that all team members are aware of the main user scenarios that we’re trying to provide and to hire more testers.”
· “We use waterfall SDLC, changing to agile would seem to reduce many of our problems.”
· “Higher awareness of management for costs of bad quality.”
· “Hire more testing resources.”
· “Improve metrics to be more accurate about the quality of the product; improve tester skills.”
· “Satisfying the customer requirement and giving more quality what is expected.”
· “Hire better qualified QA leads.”
· “Have more test environment flexibility to build and simulate a multitude of conditions”
· “Better scheduling of test time; tools to automate regression testing.”
· “Manage scope.”
· “Better planning.”
· “Have entire team (pm, dev, ba, qa) all follow SDLC methodology.”
· “Survey customer.”
· “Integrate the Product management and QA tighter.”
· “Proper project funding/ scheduling for QA.”
· “Have clear customer facing quality goals and metrics that are used by all levels of management to drive for quality-first, to deliver on time with quality, efficiency and predictability.”
· “Reduce changes in requirements after some phase of testing at least”
· “Test acceleration tools.”
· “Clearer requirements and functional specifications and Tester understanding of user workflows.”
· “Get our internal customers to understand that the end product is only as good as the requirements they provide – quality isn’t the responsibility of one group, but of everyone at every step at every level.”
· “Give the testing team more authority to adjust schedules.”
· “Better, consistent training on the product and its complexities.”
· “Establish with client at the outset what they really want to use our product for -and then work this into our test plans. Conduct broader testing”
· “More reliable computers/software.”
· “Have QA run through the Project Lifecycle”
· “If we are developing new features, we should not mimic old bad behavior just because customers are accustomed to the bad behavior.”
· “Enforce testing in the early stages of the SDLC (test requirements through use cases, create automated tests for development to use during development)”
· “Expect more from the vendors.”
· “Test more.”
· “Extend the project schedule
Add more qualified testers
More regression testing
Build nightly and test”
· “Take more breaks.”
· “Spend more time on requirements.”
· “Re-analyze and redesign the test plans/cases.”
· “Improve people on applications/support side. They are overburdened, so they don’t learn the product, just memorize answers to commonly asked questions.”
· “Do the peer reviews effectively and meet the customer needs with low number of priority issues.”

LogiGear Corporation
LogiGear Corporation provides global solutions for software testing, and offers public and corporate software testing training programs worldwide through LogiGear University. LogiGear is a leader in the integration of test automation, offshore resources, and US project management for fast, cost-effective results. Since 1994, LogiGear has worked with companies ranging from Fortune 500s to early-stage start-ups, creating unique solutions to meet their clients’ needs. With facilities in the US and Viet Nam, LogiGear helps companies double their test coverage and improve software quality while reducing testing time and cutting costs.

Data was compiled and analyzed by Michael Hackett, LogiGear Senior Vice President. This is the first analysis of the 2010 Global Testing Survey. More survey results will be included in subsequent magazine issues.
