Testing a Mission-Critical System: The Way We Do It

For mission-critical applications, it’s important to frequently develop, test, and deploy new features, while maintaining high quality. To guarantee top-notch quality, you must have the right testing approach, process, and tools in place.

I’m currently working as an offshore consultant to a tier-one retailer in the USA. The client is tough and demanding, and the system itself is high-stakes; together, these factors make the project mission critical. So let’s look at how we test the system and which approaches we take.

The latest trend in QA testing is called “shift-left testing”. Simply put, we move all QA-related activities to the beginning of the sprint. In traditional approaches, most QA activities begin only after development work is completed, so they focus on defect finding. The cost of fixing those defects is high, because all of them are identified at the end of the sprint.

Advantages of Shift-Left Testing

In keeping with these modern concepts and techniques, we now focus on defect prevention rather than defect finding. That means our work starts in the early stages of the sprint, as soon as requirement gathering begins. We review the user stories and screen mock-ups prepared by our business analyst (BA) team, and we flag anything that is not aligned with the requirements or that differs from our understanding of them. We brainstorm with both the development team and the BA team until the requirements are finalized. The objective of this exercise is to bring all teams to a shared understanding of the requirements. It also aligns with the goal of DevOps: improving collaboration between business stakeholders and the application development and operations teams.

While the development team starts its design and development work, we begin our test scenario design concurrently. We use techniques like mind mapping and Functional Specification Data Mapping (FSDM) to capture the requirements correctly in our test scenarios. Once that is complete, we send the scenarios to the development and BA teams for review, with walk-through sessions if needed. In the meantime, the QA team starts creating test cases from those scenarios. Any alterations or valid feedback from the development or BA teams is incorporated into the test cases. Manual test case creation and automation test scripting proceed in parallel.

Testing Activities Throughout the Cycle

Given the nature of our application, we focus more on API automation: it covers ground more quickly than UI automation. So as soon as we get a working environment with the APIs deployed, we start scripting. Most of the time, this is a local development environment. Once we receive the API documentation, we finalize our automation scripts by adding the remaining assertions. Since most of these tasks happen in parallel, test case creation and scripting are finished by the time development is complete.
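To illustrate that “finalize by adding the remaining assertions” step, here is a minimal sketch in Python of an API-level contract check. Everything here is hypothetical and invented for illustration: the `check_order_response` helper, the field names (`orderId`, `status`, `total`), and the payload shape are not the client’s real API.

```python
# Hedged sketch: an API-level contract check of the kind described above.
# The helper name, fields, and payload are all hypothetical examples.

def check_order_response(status_code, body):
    """Return a list of contract violations for a (hypothetical) orders API response."""
    errors = []
    # Early on, before the API docs arrive, this status check may be all we script.
    if status_code != 200:
        errors.append(f"expected HTTP 200, got {status_code}")
    # Once documentation lands, we "finalize" by adding field-level assertions.
    for field in ("orderId", "status", "total"):
        if field not in body:
            errors.append(f"missing field: {field}")
    if body.get("total", 0) < 0:
        errors.append("total must be non-negative")
    return errors

sample = {"orderId": "A-1001", "status": "CONFIRMED", "total": 42.50}
print(check_order_response(200, sample))  # → [] (no contract violations)
```

Structuring the check as a function that returns all violations, rather than stopping at the first failed assertion, makes the overnight report more useful: one run surfaces every broken field at once.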

Another important activity we perform is “peer testing”. We test the application while it is still under development, on local development environments. As developers complete features, the QA team performs high-level testing on them. We focus on application functionality rather than the UI; of course, if we see an obvious UI issue, we report it, but functionality gets most of our attention. Any issues we find at this phase are reported quickly to the development team in a group chat. We also add them to a Google spreadsheet for tracking, so they can be fixed and re-tested right away rather than waiting a whole release cycle for a post-release fix. Since this is not an official release, the bugs we find do not go into the official report either. The goal is to find and fix bugs early: a very important milestone on the journey towards defect prevention.

After the development team completes development and unit testing, they send an official QA release to the QA team. We use a common release note template for all applications, which was itself a product of the QA team. Since the majority of API-related functionality has been automated by this point, we run the automation overnight through our CI environment. The next morning, we start by verifying the automation status report and re-running the failed test cases. Any issues we find are tracked in our official defect tracking system. UI testing focuses on the happy path, since the negative test cases are already covered by API automation, which leaves testers more time for exploratory testing.
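The morning routine of re-running only the failures can be sketched as follows. This is a simplified illustration rather than our actual CI setup: tests are modeled as plain callables that raise `AssertionError` on failure, and `flaky_test` is contrived to show why a targeted re-run filters out transient failures.

```python
# Hedged sketch: overnight run, then a morning re-run of only the failures.
# Test names and bodies are illustrative, not part of our real suite.

def run_suite(tests):
    """Run each test callable; return the names of those that failed."""
    failed = []
    for name, test in tests.items():
        try:
            test()
        except AssertionError:
            failed.append(name)
    return failed

flaky_state = {"attempts": 0}

def flaky_test():
    # Contrived flakiness: fails on the first attempt, passes on the re-run.
    flaky_state["attempts"] += 1
    assert flaky_state["attempts"] > 1

tests = {"stable": lambda: None, "flaky": flaky_test}
first = run_suite(tests)                         # overnight run
rerun = run_suite({n: tests[n] for n in first})  # morning re-run of failures only
print(first, rerun)  # → ['flaky'] []
```

Only failures that survive the re-run are treated as real defects and entered into the official defect tracking system; one-off environment or timing failures drop out at this step.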

Root cause analysis is done after each major release. We then decide whether to go for another deployment or move the defects to the backlog. This decision takes into account factors such as the severity and priority of the defects, the importance of the feature, and how soon the feature will be used in production. We also maintain a root cause analysis report for each major release; any mitigation actions that need to be taken are included in the same report, which is kept for future reference.

Once the testing work is complete, we share our test results with the client. These are needed to obtain managerial approval for the production deployment. The deployment is performed by the cloud ops team, but both the Dev and QA teams participate in the deployment process.

Once the application is deployed, the QA team performs a high-level verification to make sure all the new features are included and existing functionality isn’t broken. This concludes a successful production deployment.

Sankha Jayasooriya
Sankha Jayasooriya is an IT Professional with more than 8 years of experience in the Software Quality Assurance field. He is an ISTQB certified professional specialized in service level testing, automated testing, and manual testing. His areas of domain expertise extend to retail, innovation, banking and finance, enterprise software, robotics, and mobile testing. Sankha is a co-author of the “Multi-Domain Supported and Technology Neutral Performance Testing Process Framework” white paper and is also a regular blogger on Genius Quality—Medium.

