TestArchitect Corner: Capture Screens of Application Under Test during Automation Execution

Trying to understand why failures, errors, or warnings occur in your automated tests can be quite frustrating. TestArchitect relieves this pain.

Debugging blindly can be tedious work—especially when your test tool does most of its work through the user interface (UI). Moreover, bugs can sometimes be hard to replicate when single-stepping through a test procedure.

Suppose you executed a long, automated test that contains a good deal of interaction with the interface of the Application Under Test (AUT), such as mouse clicks, keyboard input, menu item selection, and so on. When viewing the generated test results, it may be difficult to understand why some failures, errors, or warnings occurred. It would be much easier to identify the issues if the test results were accompanied by snapshots of the screen’s display just before, during, and after each interaction between the test and the AUT’s UI.

To address this problem, TestArchitect can automatically take snapshots of the AUT’s display at critical points during test execution. Because you can observe the display state of the AUT at each stage of the test, it becomes much easier to pinpoint where and how a test or application is going wrong. You can tell TestArchitect to capture screenshots during test automation for each UI-interactive action; these screenshots help you visualize what took place so that any problems can be debugged more easily.
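To give a concrete picture of what “a screenshot per UI-interactive action” means, here is a minimal sketch in plain Python/Selenium. It is purely illustrative: the click_with_screenshots helper and the element it clicks are assumptions, and this is not how TestArchitect’s action-based mechanism is implemented.

    # Illustrative sketch only: capture a screenshot just before and just after
    # a UI click, mimicking the idea described above. Not TestArchitect code.
    import time
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Chrome()

    def click_with_screenshots(css_selector, label):
        """Click an element, saving the screen state before and after the action."""
        driver.save_screenshot(f"{label}_{int(time.time())}_before.png")
        driver.find_element(By.CSS_SELECTOR, css_selector).click()
        driver.save_screenshot(f"{label}_{int(time.time())}_after.png")

    driver.get("https://example.com")
    click_with_screenshots("a", "follow_link")  # hypothetical element under test
    driver.quit()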

The number of screenshots retained by TestArchitect is determined by user settings in the Screenshot recording panel of the Execute Test dialog box just prior to the test run.

Users can specify the events (Passed, Failed, or Warning/Error) for which associated screenshots are to be retained. They can also specify the number of preceding screenshot sets to be retained for each qualified event. A single screenshot set consists of all the screenshots captured during a single UI-interactive action. The image below indicates that three screenshot sets are to be retained and logged for each Failed and Warning/Error event of the test: the screenshot set of the associated Failed/Warning/Error action and the screenshot sets of the two UI-interactive actions preceding it. Note that if the Keep field is left blank, screenshot sets for all preceding UI-interactive actions are retained.
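The retention rule itself is easy to reason about. The sketch below, written in Python purely for illustration (the names and the deque-based buffer are assumptions, not TestArchitect internals), shows how a Keep value of 3 would hold on to the failing action’s screenshot set plus the two sets that preceded it.

    # Illustrative sketch only: a "keep the last N screenshot sets" policy,
    # approximating the behavior described above. Not TestArchitect internals.
    from collections import deque

    KEEP = 3  # corresponds to the Keep field; a blank field would mean keep all

    retained = deque(maxlen=KEEP)

    def record_action(action_name, screenshot_set):
        """Remember the screenshot sets of the most recent UI-interactive actions."""
        retained.append((action_name, screenshot_set))

    def on_event(status):
        """On a qualified event (e.g. Failed or Warning/Error), log the retained sets."""
        if status in ("Failed", "Warning", "Error"):
            for action_name, screenshot_set in retained:
                print(f"Logging {len(screenshot_set)} screenshot(s) for '{action_name}'")

    # Example: three UI-interactive actions run, then the third one fails.
    record_action("click Login", ["login_before.png", "login_after.png"])
    record_action("enter user name", ["name_before.png", "name_after.png"])
    record_action("click Submit", ["submit_before.png", "submit_after.png"])
    on_event("Failed")  # logs the failing action's set and the two preceding sets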

Screenshots captured during testing are displayed in the Result Details and Failure/Error Summary tabs of local test results.

When you click a captured screenshot thumbnail in the Result Details tab, the screenshot viewer appears.

The screenshot viewer incorporates a number of functions (below).

  1. Fit the screenshot to the Image Viewer panel (full screen)
  2. Go to the previous recorded UI-interacting action
  3. Go to the next recorded UI-interacting action
  4. Click on the action name to launch TestArchitect Client, displaying a detailed description of the UI-interacting built-in action

Click on the action line number text to launch TestArchitect Client, which displays the corresponding line in its execution context.

TestArchitect doesn’t only snap pictures. When screenshot recording is enabled, it can also record video of the entire automation process, saving it at the end of the test run as an .mp4 file on your machine.
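To make the idea concrete, the following sketch shows one generic way that a sequence of captured frames could be assembled into an .mp4 file, here using OpenCV in Python. It is an illustration of the concept under stated assumptions (dummy frames, hypothetical file name), not TestArchitect’s recorder.

    # Illustrative sketch only: assemble frames into an .mp4 file with OpenCV.
    # Not TestArchitect's recorder; the dummy frames stand in for real screenshots.
    import cv2
    import numpy as np

    width, height, fps = 1280, 720, 5
    fourcc = cv2.VideoWriter_fourcc(*"mp4v")
    writer = cv2.VideoWriter("test_run.mp4", fourcc, fps, (width, height))

    for i in range(fps * 3):  # three seconds of placeholder footage
        frame = np.full((height, width, 3), (i * 5) % 255, dtype=np.uint8)
        writer.write(frame)

    writer.release()
    print("Wrote test_run.mp4")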

This screenshot recording feature is supported not only for tests executed on desktop computers, but also on Android and iOS mobile devices. To learn more about this feature, visit testarchitect.com, where you can download TestArchitect for free and see how beneficial this time-saving feature can be in your testing.

Van Pham
Van Pham has more than 10 years of experience in software automation testing on various platforms and Customer/Product Support. A key member of the organization, Van mentors, manages, and motivates LogiGear’s Support teams to provide an exceptional Customer Support Experience. Van has her B.S. in Software Engineering from National University, and an M.S. in Engineering Management.
