TestArchitect Corner: Capture Screens of Application Under Test during Automation Execution

Trying to understand why failures, errors, or warnings occur in your automated tests can be quite frustrating. TestArchitect relieves this pain.

Debugging blindly can be tedious work, especially when your test tool does most of its work through the user interface (UI). Moreover, bugs can sometimes be hard to replicate when single-stepping through a test procedure.

Suppose you executed a long automated test involving a good deal of interaction with the interface of the Application Under Test (AUT): mouse clicks, keyboard input, menu item selection, and so on. When viewing the generated test results, it may be difficult to understand why some failures, errors, or warnings occurred. It would be easier to identify issues if the test results were accompanied by snapshots of the screen's display just before, during, and after any interaction between the test and the AUT's UI.

To address this problem, TestArchitect can automatically take snapshots of the AUT's display at critical points during test execution. Observing the display state of the AUT at each stage of the test gives you a better grasp of where and how a test or application is going wrong. You can tell TestArchitect to capture screenshots of each UI-interactive action during test automation; these screenshots help you visualize what took place so that problems are easier to debug.
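To illustrate the general idea (this is a hypothetical sketch, not TestArchitect's implementation), the following Python snippet wraps each UI-interactive action so that "before" and "after" snapshots are saved for it. The ui_action decorator and the captures folder are invented names, and Pillow's ImageGrab stands in for the tool's capture mechanism:

    import functools
    import itertools
    from pathlib import Path

    from PIL import ImageGrab  # pip install pillow

    CAPTURE_DIR = Path("captures")      # hypothetical output folder
    CAPTURE_DIR.mkdir(exist_ok=True)
    _set_ids = itertools.count(1)

    def ui_action(func):
        """Save a screenshot set (before/after snapshots) for one UI action."""
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            set_id = next(_set_ids)
            ImageGrab.grab().save(CAPTURE_DIR / f"{set_id:04d}_{func.__name__}_before.png")
            try:
                return func(*args, **kwargs)
            finally:
                ImageGrab.grab().save(CAPTURE_DIR / f"{set_id:04d}_{func.__name__}_after.png")
        return wrapper

    @ui_action
    def click_login_button():
        ...  # drive the AUT's UI here (mouse click, keystroke, etc.)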

The number of screenshots retained by TestArchitect is determined by user settings in the Screenshot recording panel of the Execute Test dialog box just prior to the test run.

Users can specify the events (Passed, Failed, or Warning/Error) for which associated screenshots are to be retained, as well as the number of preceding screenshot sets to retain for each qualified event. A single screenshot set consists of all the screenshots captured during a single UI-interactive action. For example, retaining three screenshot sets for each Failed and Warning/Error event keeps the screenshot set of the associated Failed or Warning/Error action plus the screenshot sets of the two UI-interactive actions preceding it. Note that if the Keep field is left blank, screenshot sets for all preceding UI-interactive actions are retained.
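This retention rule behaves like a bounded buffer. The sketch below models it in Python (all names are hypothetical; TestArchitect's internals may differ): recent screenshot sets are held in a fixed-size buffer and flushed to the permanent log only when a qualifying event occurs, while keep=None mimics a blank Keep field:

    from collections import deque

    class ScreenshotRetention:
        """Keep the N most recent screenshot sets; log them on a qualifying event."""

        def __init__(self, keep=3, events=("Failed", "Warning/Error")):
            # keep=None mimics a blank Keep field: retain every preceding set
            self.buffer = deque(maxlen=keep)
            self.events = set(events)
            self.logged = []            # sets promoted to the test log

        def record(self, screenshot_set):
            # Called after every UI-interactive action
            self.buffer.append(screenshot_set)

        def on_event(self, event):
            # Flush the buffered sets to the log if the event type qualifies
            if event in self.events:
                self.logged.extend(self.buffer)
                self.buffer.clear()

    policy = ScreenshotRetention(keep=3)
    for s in ("set_1", "set_2", "set_3", "set_4"):
        policy.record(s)
    policy.on_event("Failed")   # logs set_2..set_4: the failing action plus the two before it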

Screenshots captured during testing are displayed in the Result Details and Failure/Error Summary tabs of local test results.

Clicking a captured screenshot thumbnail in the Result Details tab opens the screenshot viewer.

The screenshot viewer incorporates a number of functions (below).

  1. Fit the screenshot to the Image Viewer panel (full screen)
  2. Go to the previous recorded UI-interactive action
  3. Go to the next recorded UI-interactive action
  4. Click the action name to launch TestArchitect Client, which displays a detailed description of the UI-interactive built-in action

Click the action's line number to launch TestArchitect Client, which displays the corresponding line in its execution context.

TestArchitect doesn't only snap pictures. When screenshot recording is enabled, it can also record video of the entire automation process, saving it at the end of the test run as an .mp4 file on the user's machine.
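For a sense of how such a recorder might work (a rough sketch only, built on the third-party mss and opencv-python packages rather than anything TestArchitect ships), the following captures the primary display at a fixed frame rate and writes it to an .mp4 file:

    import cv2                  # pip install opencv-python
    import mss                  # pip install mss
    import numpy as np

    def record_screen(path="test_run.mp4", seconds=10, fps=10):
        with mss.mss() as screen:
            monitor = screen.monitors[1]                   # primary display
            size = (monitor["width"], monitor["height"])
            writer = cv2.VideoWriter(path, cv2.VideoWriter_fourcc(*"mp4v"), fps, size)
            for _ in range(seconds * fps):
                frame = np.array(screen.grab(monitor))     # BGRA pixels
                frame = cv2.cvtColor(frame, cv2.COLOR_BGRA2BGR)
                # Resize in case the captured frame differs from the nominal
                # monitor size (e.g. on high-DPI displays)
                writer.write(cv2.resize(frame, size))
            writer.release()

    # In a real harness this would run on a background thread for the whole test run.
    record_screen()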

This screenshot recording feature is supported not only for tests executed on desktop computers, but also on Android and iOS mobile devices. To learn more about this feature, visit testarchitect.com, where you can download TestArchitect for free, and see how beneficial this time-saving feature can be when you're testing.

Van Pham
Van Pham has more than 10 years of experience in software automation testing on various platforms and Customer/Product Support. A key member of the organization, Van mentors, manages, and motivates LogiGear’s Support teams to provide an exceptional Customer Support Experience. Van has her B.S. in Software Engineering from National University, and an M.S. in Engineering Management.
