Testing a Mission-Critical System: The Way We Do It

For mission-critical applications, it’s important to frequently develop, test, and deploy new features, while maintaining high quality. To guarantee top-notch quality, you must have the right testing approach, process, and tools in place.

I’m currently working as an offshore consultant to a tier-one retailer in the USA. The client is tough and demanding, and the system is central to their business, which makes the project a mission-critical one. So let’s look at how we test the system and the approaches we take.

The latest trend in QA testing is called “shift-left testing”. Simply put, it means moving all QA-related activities to the beginning of the sprint. In traditional approaches, most QA activities begin only after development work is completed, so they focus on finding defects. The cost of fixing those defects is high because they are all identified at the end of the sprint.

Advantages of Shift-Left Testing

Since we are adopting modern concepts and techniques, we now focus more on defect prevention than on defect finding. That means our work starts in the early stages of the sprint, as soon as requirement gathering begins. We review the user stories and screen mock-ups prepared by our business analyst (BA) team, and we report anything that is not aligned with the requirement or that differs from our understanding of it. We brainstorm with both the development team and the BA team until the requirements are finalized. The objective of this exercise is to bring all the teams to the same understanding of the requirement. It also aligns with the goal of DevOps: improving collaboration between business stakeholders, application development, and operations teams.

While the development team starts their design and development work, we begin our test scenario design in parallel. We use techniques like mind mapping and Functional Specification Data Mapping (FSDM) to capture the requirements correctly in our test scenarios. Once that is complete, we send the scenarios to the development and BA teams for review, and hold walk-through sessions with them if needed. In the meantime, the QA team starts creating test cases from those scenarios. If either the development or BA team provides alterations or valid feedback, we incorporate it into our test cases. Manual test case creation and automation test scripting are performed simultaneously.

Testing Activities Throughout the Cycle

Given the nature of our application, we focus more on API automation, which covers ground more quickly than UI automation. As soon as we get a working environment with the APIs deployed, we start scripting; most of the time, this is a local development environment. Once we receive the API documentation, we finalize our automation scripts by adding the remaining assertions. Since most of these tasks happen simultaneously, test case creation and scripting are completed by the time development finishes.
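To illustrate the kind of assertions added once the API documentation is finalized, here is a minimal sketch in Python. The endpoint shape, field names, and statuses are hypothetical examples, not taken from our actual suite; in practice these checks would run against live responses rather than a canned payload.

```python
# Minimal sketch of a response-assertion helper for API automation.
# The field names and status values below are hypothetical examples.

def assert_order_response(payload: dict) -> None:
    """Check that an order API response has the expected shape."""
    # Required top-level fields and their expected types
    expected = {"order_id": str, "status": str, "items": list, "total": (int, float)}
    for field, ftype in expected.items():
        assert field in payload, f"missing field: {field}"
        assert isinstance(payload[field], ftype), f"wrong type for: {field}"
    # Business-level assertions, added once the API docs are finalized
    assert payload["status"] in {"CREATED", "CONFIRMED", "SHIPPED"}
    assert payload["total"] >= 0

# Example: validating a canned response captured from a dev environment
sample = {"order_id": "ORD-1001", "status": "CREATED",
          "items": [{"sku": "A1", "qty": 2}], "total": 59.98}
assert_order_response(sample)  # passes silently when the shape is correct
```

Separating shape checks from business-level checks like this lets the early scripts run against a local environment before the documentation arrives, with the stricter assertions filled in later.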

Another important activity we perform is “peer testing”. We test the application on local development environments while it is still under development. As developers complete features, the QA team performs high-level testing on them. We focus on application functionality rather than the UI; of course, if we see an obvious UI issue, we report it, but we pay more attention to functionality. Whatever issues we find at this phase, we report quickly to the development team in a group chat. We also add them to a Google spreadsheet for tracking, so they can be fixed and retested quickly rather than waiting a whole release cycle for the defect to be fixed post-release. Since the release is not an official one, the bugs we find do not go into the official report either. The target is to find and fix bugs at an early stage. This is a very important milestone on the journey towards defect prevention.

After the development team completes development and unit testing, they send an official QA release to the QA team. We use a common release note template for all the applications, which was also a product of the QA team. Once the majority of the API-related functionality has been automated, we run the suite overnight through our CI environment. The next morning, we start by verifying the automation status report and re-running the failed test cases. Any issues we find are tracked in our official defect tracking system. UI testing focuses more on the happy path, since the negative test cases are covered by the API automation, so testers get more time for exploratory testing.
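The morning triage step can be sketched as follows. This is a simplified illustration, assuming a flattened report format; a real CI system emits richer results, and `rerun` here only echoes the plan rather than triggering anything.

```python
# Sketch of the morning triage step: parse the overnight automation report
# and pick out the failed cases to re-run. The report format is a
# hypothetical simplification of what a real CI system produces.

def failed_cases(report: list) -> list:
    """Return the names of test cases that failed in the overnight run."""
    return [case["name"] for case in report if case["status"] == "FAILED"]

def rerun(cases: list) -> dict:
    """Placeholder for triggering a re-run; here it just echoes the plan."""
    return {"rerun": cases, "count": len(cases)}

overnight = [
    {"name": "api.orders.create", "status": "PASSED"},
    {"name": "api.orders.cancel", "status": "FAILED"},
    {"name": "api.inventory.sync", "status": "FAILED"},
]
plan = rerun(failed_cases(overnight))
print(plan)  # {'rerun': ['api.orders.cancel', 'api.inventory.sync'], 'count': 2}
```

Re-running only the failures first separates genuine defects from environment flakiness before anything is logged in the defect tracking system.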

Root cause analysis is done after each major release. We decide whether to go for another deployment or move the defects to the backlog. For this decision, we take into account factors such as the severity and priority of the defects, the importance of the feature, and how soon the feature will be used in production. We also maintain a root cause analysis report for each major release, which includes whatever mitigation actions need to be taken. This report is kept for future reference.

Once the testing work is complete, we share our test results with the client. Those are needed to get the managerial approval for the production deployment. The deployment will be performed by the cloud ops team, but both the Dev and QA teams will also participate in the deployment process.

Once the application gets deployed, the QA team will perform a high-level verification to make sure all the new features are included and the already existing functionality isn’t broken. This will conclude a successful production deployment.

Sankha Jayasooriya
Sankha Jayasooriya is an IT Professional with more than 8 years of experience in the Software Quality Assurance field. He is an ISTQB certified professional specialized in service level testing, automated testing, and manual testing. His areas of domain expertise extend to retail, innovation, banking and finance, enterprise software, robotics, and mobile testing. Sankha is a co-author of the “Multi-Domain Supported and Technology Neutral Performance Testing Process Framework” white paper and is also a regular blogger on Genius Quality—Medium.
