Mobile testing is, by its nature, an unstable, flaky, and unpredictable activity.
Just when you think you have covered every corner and created a “stable” environment, your test cycle still gets stuck on one item or a few.
In this post, I’ll try to identify some of the key root causes of test automation flakiness and suggest preventive actions to eliminate them.
What Can Block Your Test Automation Flow?
From ongoing experience, the key items that most often block test automation of mobile apps are the following:
- Popups – security, available OS upgrades, login issues, etc.
- Readiness of devices under test (DUTs) – the test meets the device in the wrong state
- Environment – device battery level, network connectivity, etc.
- Tools and test framework fit – are you using the right tool for the job?
- Use of the “right” object identifiers, the Page Object Model (POM), and other automation best practices
- Automation at scale – what to automate, and on which platforms?
All of the above contribute, in one way or another, to the end-to-end test automation execution.
We can divide the above six bullets into two areas:
- The Test Environment
- Best Practices
Solving The Environment Factor in Mobile Test Automation
To address the test environment’s contribution to test flakiness, engineers need full control over the environment they operate in.
If the test environment and the devices under test (DUTs) are not fully managed, controlled, and secured, the entire operation is at risk. In addition, the term “test environment readiness” should reflect the following:
- Devices are always cleaned up prior to test execution, or are in a “state”/baseline known to the developers and testers
- If there are repetitive, known popups – such as security permissions, install/uninstall popups, OS upgrades, or other app-specific popups – they should be accounted for in the prerequisites of the test or prevented proactively before execution.
- Unstable network connectivity is often a key cause of flaky testing – engineers need to make sure the devices are connected to a Wi-Fi or cellular network before test execution starts. This can be done either as a prerequisite validation of the network or through more generic environment monitoring.
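As a rough sketch of what such a readiness gate might look like (the thresholds, the parsed `adb` fields, and the helper names are illustrative assumptions, not a specific vendor’s API):

```python
# Sketch: gate test execution on device readiness.
# Thresholds and the parsed "dumpsys" fields are illustrative assumptions.

def parse_battery_level(dumpsys_output: str) -> int:
    """Extract the battery level from `adb shell dumpsys battery` output."""
    for line in dumpsys_output.splitlines():
        line = line.strip()
        if line.startswith("level:"):
            return int(line.split(":")[1].strip())
    raise ValueError("battery level not found")

def device_ready(battery_level: int, wifi_connected: bool,
                 min_battery: int = 30) -> bool:
    """A device is 'ready' only with enough charge and a live network."""
    return battery_level >= min_battery and wifi_connected

# Appium capabilities that proactively suppress common Android popups
# (autoGrantPermissions and fullReset are real Appium/UiAutomator2 options).
CAPS = {
    "platformName": "Android",
    "appium:automationName": "UiAutomator2",
    "appium:autoGrantPermissions": True,  # pre-grant runtime permission popups
    "appium:fullReset": True,             # start from a known clean state
}

sample = "Current Battery Service state:\n  level: 87\n  scale: 100"
print(device_ready(parse_battery_level(sample), wifi_connected=True))  # True
```

In practice the battery and connectivity values would come from `adb shell dumpsys` or your device cloud’s API; the point is that the check runs before the test, not inside it.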
Following Best Practices
In previous blogs, I addressed the importance of selecting the right testing frameworks and IDEs, as well as leveraging the cloud as part of test automation at scale. Having eliminated the test environment risks covered in the section above, it is important to make sure that both developers and test automation engineers follow proper guidelines and practices in their test automation workflow.
Since testing has shifted left toward the development team, it is important that dev and test align on a few things:
- What to automate?
- On what platforms to test?
- How to automate (best practices)?
- Which tools should be used to automate?
- What goes into CI, and what is left outside?
- What is the role of Manual and Exploratory testing in the overall cycle?
- What is the role of Non-Functional testing in the cycle?
The above points (a partial list) cover some fundamental questions that each practitioner should ask continuously to ensure their team is heading in the right direction.
Each of the above bullets can be tied to at least one, if not many, best practices.
- To address the key question of what to automate, Angie Jones provides a great decision tool: each test scenario is evaluated against a set of metrics that add up to a score. The highest-scoring test cases are great candidates for automation, while the lowest-scoring ones can safely be skipped.
- To address the second question, on platform selection, teams should monitor their ongoing web and mobile traffic, perform market research, and learn from existing market reports/guides that address test coverage.
- Answering the question of “how to automate” is a big one :). There is more than one thing to keep in mind, but in general, automation should be repetitive, stable, and time- and cost-efficient – if something prevents one or more of these objectives, it’s a sign that you’re not following best practices. Some best practices revolve around using proper object identifiers; others involve building the automation framework with proper tagging and error handling, so that when something breaks it is easy to pinpoint; and more.
- The question around tools and test frameworks is again a big one. Answering it well depends on the project requirements and complexity, the application type (native, web, responsive, PWA), and the test types (functional, non-functional, unit). In the end, it is important to have a mix of tools that “play” nicely together, each providing unique value without stepping on the others.
- Tests that enter CI should be picked very carefully. Here, it is not about the quantity of tests but about the quality, stability, and value these tests bring, with a focus on fast feedback. If a test requires heavy environment setup, is flaky by nature, or takes too much time to run, it might not be wise to include it in CI.
- Addressing the various testing types raised in questions 6 and 7 – the answer depends on the project objectives; however, there is clear value in each of the following for mobile app quality assurance:
- Accessibility and performance testing provide key UX indicators for your app and should be automated as much as possible.
- Security testing is a common oversight for many teams, and should be covered through static code analysis, OWASP validations, and more.
- Exploratory, manual, and crowd testing provide another layer of test coverage and insight into your overall app quality; hence, they should be in your test plan and distributed throughout the DevOps cycle.
- Deep links (supported by Appium) let tests jump directly to specific screens, shortening setup and widening coverage.
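On the last point, Appium’s UiAutomator2 driver exposes deep-link navigation through the `mobile: deepLink` execute-script command. A thin helper might look like the sketch below; the URL and package name are placeholders, and a stub stands in for a live Appium driver so the call pattern can be shown without a device:

```python
# Sketch: open an app screen directly via a deep link in Appium.
# "mobile: deepLink" is a real Appium (UiAutomator2) command; the driver,
# URL, and package below are placeholders for your own setup.

def open_deep_link(driver, url: str, package: str):
    """Jump straight to a screen instead of navigating through the UI."""
    driver.execute_script("mobile: deepLink", {"url": url, "package": package})

class FakeDriver:
    """Minimal stub standing in for a live Appium driver (illustration only)."""
    def __init__(self):
        self.calls = []
    def execute_script(self, script, args):
        self.calls.append((script, args))

driver = FakeDriver()
open_deep_link(driver, "theapp://login", "com.example.theapp")
print(driver.calls[0][0])  # mobile: deepLink
```

With a real driver, a deep link like this replaces several taps of UI navigation in a test’s setup, which both speeds the test up and removes a source of flakiness.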
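To make the “what to automate” scoring idea from the first bullet concrete, here is a minimal sketch in the spirit of Angie Jones’s model: rate each scenario on weighted criteria, sum to a score, and automate the highest-scoring candidates. The criteria, weights, and threshold below are illustrative assumptions, not her exact rubric:

```python
# Sketch: a "what to automate" scorecard. Criteria and weights are
# illustrative; tune them to your own team's priorities.

CRITERIA_WEIGHTS = {
    "distinctness": 1.0,    # does it cover something nothing else covers?
    "risk": 1.5,            # impact and probability of failure
    "value": 1.5,           # value of the feature to users
    "cost_efficiency": 1.0, # how cheap is it to script and maintain?
}

def score(scenario_ratings: dict) -> float:
    """Weighted sum of per-criterion ratings (each rated 0-5)."""
    return sum(CRITERIA_WEIGHTS[c] * r for c, r in scenario_ratings.items())

def pick_candidates(scenarios: dict, threshold: float = 15.0) -> list:
    """Return scenario names worth automating, highest score first."""
    scored = {name: score(r) for name, r in scenarios.items()}
    return sorted((n for n, s in scored.items() if s >= threshold),
                  key=lambda n: scored[n], reverse=True)

scenarios = {
    "login_happy_path": {"distinctness": 5, "risk": 5, "value": 5,
                         "cost_efficiency": 4},
    "rare_admin_tweak": {"distinctness": 2, "risk": 1, "value": 1,
                         "cost_efficiency": 2},
}
print(pick_candidates(scenarios))  # ['login_happy_path']
```

The output is a ranked shortlist rather than a yes/no answer, which keeps the final call with the team.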
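The CI-admission point above can be sketched the same way: a simple filter that admits only fast, historically stable tests. The thresholds and the test-metadata shape here are illustrative assumptions:

```python
# Sketch: a CI admission filter. Keep only tests that are quick and
# stable; thresholds and metadata fields are illustrative assumptions.

def belongs_in_ci(test: dict, max_runtime_s: float = 120.0,
                  min_pass_rate: float = 0.98) -> bool:
    """Fast feedback beats quantity: admit only quick, stable tests."""
    return (test["avg_runtime_s"] <= max_runtime_s
            and test["pass_rate"] >= min_pass_rate
            and not test["needs_heavy_setup"])

suite = [
    {"name": "smoke_login", "avg_runtime_s": 40,
     "pass_rate": 0.999, "needs_heavy_setup": False},
    {"name": "full_sync_e2e", "avg_runtime_s": 900,
     "pass_rate": 0.91, "needs_heavy_setup": True},
]
ci_suite = [t["name"] for t in suite if belongs_in_ci(t)]
print(ci_suite)  # ['smoke_login']
```

Tests that fail the filter still run, just in a slower nightly or pre-release stage instead of on every commit.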