Methods and strategy have been my favorite topics since I started working in testing. Test strategy is essentially engineering problem-solving: a search for efficiency combined with an attempt to measure effectiveness. So, how do we develop a set of practices to solve our Software Testing engineering problems?

One aspect of Lean Software Development and Lean Manufacturing that I love is the idea of “amplify learning,” or continuous learning. An example of amplified learning is finding bugs: the more bugs you find, the more you learn, and the better you understand your system. Notably, Lean practices do not include the idea of “best practices.” Some people live by best practices, and I am regularly asked, “What are the best testing practices?” My response is always the same: there are no best practices. There are better practices, but they depend on the context of your Software Development practices, tools, customers, platform, skills, release cycles, and other factors; that dependency is why I conclude there are no standard best practices. It also means that as your product matures and you gain more knowledge of your product, platform, and customers, your testing strategy must change. A team doing what they do simply because it is how they have always done it tells me they are neither learning nor growing. Strategies are not static: they evolve and get fine-tuned.
Strategy questions are some of the most frequently asked questions in my consulting work.
From Developers, I get planning questions:
- How much testing is enough?
- What’s the best and easiest method to show that everything’s working?
- Do I have to unit test at all if I test everything at the API level?
From Test Engineers, I get a wide variety of questions:
- If they’re running all of these tests at the unit level, what do I have to test at the API or UI level?
- If I have automated tests in a headless browser, do I still need to test specific browsers?
- Do we have enough Automated Testing? Do we have too much Automated Testing?
- How can I get the “biggest bang for my buck” with Test Automation?
From Product Owners, I get the same questions I was asked 20 years ago:
- Why can’t my test team test faster?
- Why do they miss bugs?
- Why can’t we automate all of our testing?
The questions I love the most are:
- What’s the best method to use for the most coverage with a minimum number of test cases?
- How can I cut maintenance costs on my automated tests?
- I don’t know our users—how can I test?
With the exception of Developers now doing more of the testing themselves, these questions are testing questions common across development methods. They are not specific to Agile, DevOps, or Continuous Integration/Continuous Delivery (CI/CD), and they are not unique to deploying faster, as we do today. The answers to all of these questions revolve around developing a strategy and being able to:
- Articulate the strategy and communicate it to the team
- Ensure that Developers, Product Owners, and Test Engineers all understand what is being tested, at what level, and how
- Define what will be automated, what will be done manually, and what the biggest risks are
The overarching issue here is that you can go through an entire Software Engineering program at a university and learn little to nothing about test methods and strategy. Most Test Engineers don’t arrive with these skills built in; they don’t always have a working knowledge of 5 or 6 different test methods for tackling common or complex testing problems, which means those problems get solved with 1 or 2 basic methods. What I find is that most teams will validate the acceptance criteria, “mess around” with the function for a while (i.e., perform Ad Hoc Testing), and call it done. However, there are a great number of courses, programs, and certifications surrounding Software Testing today that can give you a great head start; there is no excuse for not knowing multiple ways to build a strategy these days.
So, this brings us to our March issue of LogiGear Magazine: Smarter Testing Strategies for the Modern SDLC. This issue features some great articles that can start the conversation of, “Do my testing strategies need some tweaks?” Our cover story, Smoke Testing: An Exhaustive Guide for a Non-Exhaustive Suite, delves deep into Smoke Testing, highlighting some better practices and offering some tips and tricks. Blogger of the Month explores all of the necessary components of an effective Mobile Testing strategy. Considerations for Automating Testing for Data Warehouse and ETL Projects explains just what ETL Testing is, as well as how and why you should automate it. TestArchitect Corner features an in-depth, step-by-step guide to automating Cross-Browser Testing, and our infographic, How to Develop a Test Automation Strategy, provides a roadmap for ensuring your Automation strategy is effective from the start. Finally, Leader’s Pulse offers some advice for managers and leaders who may be struggling with promoting productivity within their teams.
But, as I stated earlier, there are no overarching best practices. While this issue is full of useful information, it is not a tell-all guide to the perfect testing practice. Rather, use it as a means to “amplify learning.” With that being said, please enjoy our newest issue of LogiGear Magazine!