Test engineers face a rapidly changing mobile application landscape, making mobile test automation a necessity.
Mobile apps are becoming increasingly complex as tablet and smartphone technology advances.
Test engineers have to address multiple challenges while testing data-centric mobile apps, including the growing diversity of device features, platforms, and technologies.
Fortunately, mobile test automation can cover the majority of the functional and non-functional intricacies of app behavior.
Test automation can be considered a mix of Environmental, Behavioral, Performance and Complete Ecosystem testing.
Environmental Testing
1 – Using Real Devices
When it comes to environmental testing, my first piece of advice is to use physical devices for testing and not emulators.
While various tools on the market let us run automation on emulators, emulators may not faithfully replicate the behavior of actual devices.
Testing on actual devices gives a realistic evaluation of environmental and device-specific factors such as performance benchmarks and screen resolution.
You also need to keep in mind that application behavior may be affected by the network type, latency, and carrier infrastructure in the target geographies. An automation testing strategy should emphasize periodically switching between the available networks so that the entire spectrum of network technologies can be tested.
2 – Automated Network Switching
Switching networks helps you understand how application behavior changes and helps identify any performance bottlenecks.
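As a minimal sketch of what this could look like with the Appium Python client on Android (the server URL, package name, and the check_core_flows() helper are illustrative assumptions, not prescribed values):

```python
# Minimal sketch: cycling network types with the Appium Python client (Android).
from appium import webdriver
from appium.options.android import UiAutomator2Options
from appium.webdriver.connectiontype import ConnectionType

options = UiAutomator2Options()
options.app_package = "com.example.app"    # hypothetical app under test
options.app_activity = ".MainActivity"
driver = webdriver.Remote("http://127.0.0.1:4723", options=options)

def check_core_flows():
    """Placeholder for the checks to repeat on each network type."""
    ...

# Re-run the same checks on each network configuration.
for connection in (ConnectionType.WIFI_ONLY,
                   ConnectionType.DATA_ONLY,
                   ConnectionType.ALL_NETWORK_ON):
    driver.set_network_connection(connection)
    check_core_flows()
```

The later sketches in this article reuse this driver session.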
3 – Automatic Application Installation through OTA
If your application supports different platforms and configurations, you need to verify successful installation and upgrade on each target platform. This matters even more now that software is distributed in different ways: via the internet, from a network location, or pushed directly to the end user’s device. An automation testing suite is incomplete if it cannot install a third-party application automatically over the air.
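Here is one way this might look with the Appium Python client, reusing the driver session from the earlier sketch (the package id and APK URL are hypothetical):

```python
# Sketch: download a build "over the air", then verify a clean install.
import urllib.request

APP_ID = "com.example.app"                      # hypothetical package id
APK_URL = "https://builds.example.com/app.apk"  # hypothetical OTA location

apk_path, _ = urllib.request.urlretrieve(APK_URL, "app-under-test.apk")

if driver.is_app_installed(APP_ID):
    driver.remove_app(APP_ID)                   # exercise the clean-install path
driver.install_app(apk_path)
assert driver.is_app_installed(APP_ID), "installation failed"
```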
4 – Manage Notification Services
While executing automated test cases, the process should be capable of handling or simulating events such as receiving a call, SMS, or email, and conditions like a “battery low” indicator. An automation system should incorporate these types of events and simulate them while running the test cases. Simulating these events is a must before certifying any application.
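A minimal sketch, assuming an Android emulator (the gsm/sms console commands work on emulators only; the phone number and battery level are arbitrary test values):

```python
# Sketch: injecting interrupt events with standard adb tooling.
import subprocess

def adb(*args):
    subprocess.run(["adb", *args], check=True)

adb("emu", "gsm", "call", "5551234567")        # simulate an incoming call
adb("emu", "sms", "send", "5551234567", "hi")  # simulate an incoming SMS
adb("shell", "dumpsys", "battery", "set", "level", "5")  # fake a "battery low" state
adb("shell", "dumpsys", "battery", "reset")    # restore real battery stats afterwards
```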
To make the automation complete, you should also have a third-party application running on the device that opens multiple threads and multiple connections to the server, performs long-running operations, and so on. This helps the automation tool analyze the performance of the original application and capture the exact state of memory and CPU while they are being consumed by another service.
Behavioral Testing
1 – Screen Orientation Change
Most devices now support a screen orientation feature, so your automation strategy should include orientation testing.
On some platforms – including Windows Mobile & Android – the orientation change can be triggered by extending the soft keyboard. An automation ecosystem should include automatic switching of orientation so that the application can be tested on all UI changes.
In the case of Android, the automation should trigger configuration-change events for all activities, so that the UI can be exercised in both orientations at runtime.
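With the Appium session from the first sketch, orientation switching can be as simple as the following (verify_layout() is an assumed helper for your own assertions):

```python
# Sketch: exercising the UI in both orientations.
def verify_layout():
    """Assumed helper: assert the key elements are still visible."""
    ...

for orientation in ("LANDSCAPE", "PORTRAIT"):
    driver.orientation = orientation
    assert driver.orientation == orientation
    verify_layout()
```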
Your automation strategy should always include simulation of a “no network” scenario on devices, for example: when a device is switched to airplane mode. This will help to understand the behavior of an application when there is no network access.
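On Android, Appium can flip airplane mode directly; a sketch (app_shows_offline_banner() is a hypothetical helper for whatever offline behavior your app should exhibit):

```python
# Sketch: simulating a "no network" scenario via airplane mode.
from appium.webdriver.connectiontype import ConnectionType

driver.set_network_connection(ConnectionType.AIRPLANE_MODE)
assert app_shows_offline_banner()   # hypothetical check: app degrades gracefully
driver.set_network_connection(ConnectionType.ALL_NETWORK_ON)
```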
For applications that support different hardware features, we advise that you take appropriate measures to automate the use of a device’s native capabilities. For example, if an application has the functionality to take photographs, the automation should cover this scenario while opening the camera interface and performing the relevant steps to take a picture.
Testing your application with native features active shows whether it is compatible with a platform’s native capabilities and whether changes to those features would affect your application. An automation tool or script should be capable of driving these hardware features.
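For the camera example, a sketch on Android might look like this (the “Take photo” control is an assumption about the app under test; key code 27 is Android’s KEYCODE_CAMERA, which acts as the shutter on devices that map it):

```python
# Sketch: driving a photo-capture flow.
from appium.webdriver.common.appiumby import AppiumBy

driver.find_element(AppiumBy.ACCESSIBILITY_ID, "Take photo").click()  # assumed control
driver.press_keycode(27)   # KEYCODE_CAMERA: trigger the shutter where supported
```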
2 – Automatic Simulation of Location Attributes
To explain this best practice, I would like to share an example that illustrates what location attribute simulation is for.
Suppose your application’s business logic shows nearby ATMs for your current location, but you are based outside the US and testing against a US server, which only returns ATMs located in the US.
A good automation system should be able to simulate the required location: when you run the application outside the US, the automation injects US latitude and longitude coordinates into the system so that you still get the expected list of ATMs.
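With Appium this is essentially a one-liner; a sketch (the coordinates are an arbitrary US location and the result locator is an assumption):

```python
# Sketch: injecting GPS coordinates, then checking the location-based results.
from appium.webdriver.common.appiumby import AppiumBy

driver.set_location(40.7580, -73.9855, 10)   # latitude, longitude, altitude
rows = driver.find_elements(AppiumBy.ID, "com.example.app:id/atm_row")  # assumed id
assert rows, "expected at least one US ATM in the results"
```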
3 – System Events versus User Events
System events are triggered by code the application developer writes into the application; for example, a developer writes code that injects a button-click event automatically. User events are those performed by the end user, such as pressing a key on the device’s keypad.
In both scenarios the event ultimately fires at the same place in the application, so your automation tool should be able to differentiate between them.
The stability and reliability of an application also depend on how it handles improper user input. Try checking the application’s behavior by introducing exceptions, such as entering invalid data; for example, put characters in a field where numbers are expected. Or introduce unusual situations, like turning off the network while downloading a file. This will improve the stability of your application and expose its weak points.
To automate these rigorous flows, the automation system should make the test cases data-driven: testers define a few sets of test data as key-value pairs, and the automation inserts each set automatically.
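One common way to do this is pytest’s parametrization; a sketch (the driver fixture and element ids are assumptions about your test harness and app):

```python
# Sketch: a data-driven negative test for a numeric input field.
import pytest
from appium.webdriver.common.appiumby import AppiumBy

INVALID_AMOUNTS = ["abc", "12.3.4", "", "!@#"]  # characters where numbers are expected

@pytest.mark.parametrize("amount", INVALID_AMOUNTS)
def test_rejects_invalid_amount(driver, amount):   # driver: assumed pytest fixture
    field = driver.find_element(AppiumBy.ID, "com.example.app:id/amount")
    field.clear()
    field.send_keys(amount)
    driver.find_element(AppiumBy.ID, "com.example.app:id/submit").click()
    assert driver.find_element(AppiumBy.ID, "com.example.app:id/error").is_displayed()
```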
Performance Testing
1 – Collect CPU and Memory Analytics
One of the best mobile test practices is to collect memory, CPU, and battery analytics from time to time while testing. This information can be useful in identifying and addressing problem areas.
A CPU-intensive, memory-hungry, battery-draining app is sure to be disliked by end users and is likely to be a commercial failure. It is crucial to benchmark, measure, and optimize your application against these performance parameters.
But the question is how can automation make this analysis better?
An automation testing tool should gather CPU, memory, and battery statistics as each test case executes. This way the developer gets analytics for each application flow and can immediately pinpoint the problem.
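On Android, the Appium Python client exposes this directly; a sketch reusing the earlier driver session (the package name and run_test_case() helper are assumptions):

```python
# Sketch: sampling performance counters around each test case.
APP_ID = "com.example.app"

def sample_performance(driver):
    """Collect CPU, memory, and battery stats for the app under test."""
    stats = {}
    for data_type in ("cpuinfo", "memoryinfo", "batteryinfo"):
        # Each call returns a small table: a header row plus value rows.
        stats[data_type] = driver.get_performance_data(APP_ID, data_type, 10)
    return stats

before = sample_performance(driver)
run_test_case()                      # assumed helper: the flow under test
after = sample_performance(driver)   # diff before/after to spot regressions
```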
2 – Application Responsiveness
Another important non-functional requirement is application responsiveness. A responsive UI is snappy – it responds to inputs promptly, and doesn’t leave its users hanging.
While planning the automation of your application, you should measure the load time of the application’s screens and the time required for screen navigation.
With the technology advancements and maturity in mobile platforms, the user experience has become a very important consideration. The app should not only look good but also perform well.
3 – UI Navigation Performance
The UI should be effective and transitions should be fast. Your automation strategy should focus on UI response time, based on the actual time the application takes to navigate between and draw screens. Since the automation runs on multiple devices, you should compare the response time of each screen across devices and flag to the test team any devices that take longer than others.
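A sketch of one way to measure this with explicit waits (the element ids are assumptions; time.monotonic() avoids wall-clock adjustments skewing the measurement):

```python
# Sketch: timing how long a screen takes to appear after a tap.
import time
from selenium.webdriver.support.ui import WebDriverWait
from appium.webdriver.common.appiumby import AppiumBy

def time_navigation(driver, button_id, landing_id, timeout=10):
    button = driver.find_element(AppiumBy.ID, button_id)
    start = time.monotonic()
    button.click()
    WebDriverWait(driver, timeout).until(
        lambda d: d.find_element(AppiumBy.ID, landing_id).is_displayed())
    return time.monotonic() - start

elapsed = time_navigation(driver, "com.example.app:id/details_btn",
                          "com.example.app:id/details_screen")
print(f"navigation took {elapsed:.2f}s")  # compare this figure across devices
```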
4 – Usage of Performance Analytics Tools
You can easily check the performance and memory usage of your mobile applications with any of the myriad tools available on the market, such as Traceview and Eclipse Memory Analyzer for Android, the J2ME profiler for Java phones, the JDE Memory Analyzer for BlackBerry, and Instruments for iOS.
These tools help by providing information about the memory usage, CPU cycles, etc. They also help debug applications and can create performance profiles.
Ideal Test Ecosystem
Having seen the best practices of mobile application automation testing, let’s look at what would make an ideal test ecosystem.
1 – Test & Result Protocol
When talking about automation, you should clearly define the test and result protocol. Ensure that the automation tools or scripts record, for every failure, the exact reason for the failure, a screenshot from the device, and complete logging information from the native shell.
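A sketch of a failure hook along these lines (the artifacts directory is arbitrary; wire this into your runner’s failure handling, e.g. a pytest fixture):

```python
# Sketch: capturing a screenshot and device logs on test failure.
import os
import subprocess

def capture_failure_evidence(driver, test_name):
    os.makedirs("artifacts", exist_ok=True)
    driver.get_screenshot_as_file(f"artifacts/{test_name}.png")
    logcat = subprocess.run(["adb", "logcat", "-d"],
                            capture_output=True, text=True, check=True)
    with open(f"artifacts/{test_name}.log", "w") as fh:
        fh.write(logcat.stdout)   # full native-shell log for the failure report
```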
2 – Device Management
Device management is one of the most important practices for an ideal test ecosystem. This means there should be a system to manage your devices and categorize them by parameters such as manufacturer, model, and OS version.
For example, there is a huge number of Android devices on the market. If we want complete functional testing across a variety of Android devices, we need to provision them automatically via the cloud so that they can be used directly by customers to run test cases across geographies.
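A minimal sketch of such a registry (the device records are illustrative; a real system would pull them from a device cloud’s API):

```python
# Sketch: categorizing devices and selecting targets by attribute.
from dataclasses import dataclass

@dataclass
class Device:
    udid: str
    manufacturer: str
    model: str
    os: str
    os_version: str

DEVICES = [
    Device("emulator-5554", "Google", "Pixel 6", "Android", "13"),
    Device("emulator-5556", "Samsung", "Galaxy S21", "Android", "12"),
]

def pick(manufacturer=None, os_version=None):
    """Select target devices for a test run by attribute."""
    return [d for d in DEVICES
            if (manufacturer is None or d.manufacturer == manufacturer)
            and (os_version is None or d.os_version == os_version)]
```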
3 – Test Case Management
In any complex application there are a huge number of test cases ranging from hundreds to thousands and sometimes more. An ideal test automation management system should be able to store, organize, execute and view test cases and their results reports. Any number of test cases can be assigned to their target devices using automation, which in turn executes these tests automatically and provides complete results to the user.
4 – Result Reporting
This refers to reporting of crucial app statistics that help evaluate the overall application stability and readiness for the market. These reports are also helpful for iterative testing cycles.
We suggest that result reports be comprehensive, covering the complete test scenarios and, for failed test cases, the reason for the failure.