Mobile app test automation

Our iOS and Android apps are among the most popular apps in the Netherlands. We launched them in 2011; in 2012 we shipped three releases a year per platform, and we now release every 2-3 weeks. The way we develop and test the apps has grown from a 2-person team into a cross-functional team covering iOS, Android, API development, testing, design and UX.

Regular Releases

We review the comments made in the App and Play stores and try to include requested fixes and features in every release. This has led to a steady rise in our app store reviews and ratings. To help us release faster, we have automated as much as we can.

In general, we release to Android first: a small percentage of our users gets the update, we monitor for crashes and other issues, and then we increase the percentage of users that receive the new app. To become a Beta user and get early access to our new releases – go here.


When new app code is pushed to the develop branch, a test run is started in a remote device cloud, where we can choose various screen sizes (phone and tablet), rotations, OS versions, and device makes and models. If all of the regression test runs are green, we build a release candidate that everyone in the teams can download and test with.

iPhone Device Cloud

We use the open-source test framework Calabash for our automated regression acceptance tests, which lets us write the same tests for Android and iOS despite the different UI patterns on each platform. The tests run on a mixture of local devices and the Xamarin device cloud.

Reporting of Tests


We use Jenkins for our CI environment, and all jobs are shown like this on monitors around the office:

– Green background: the last run passed without failures.
– Red background: the last run had failures; the final failure count is shown.
– Dial icon: the tests are in progress.
– Orange background with dial: the last run failed, but there are no failures yet in the current run.
– Red background with dial: the current run is already failing; the current failed test count is shown.

This is really useful because, at a glance, we can see how ‘red’ a build is, and we can start looking at failures while a test run is still ongoing.

By clicking on the box we can look at the failing tests.


Each failure shows the feature name, scenario name, a screenshot and the error message.

On the right-hand side there is an indication of the status of the last 30 runs.

You can see that this is not a flaky test: it has actually been failing since a push 5 runs ago.

How does it work?

In Cucumber, we can use the ‘After’ hook to execute code after every test completes. We call our internal test reporting API with the details of the scenario: name, status (PASS|FAIL) and, if it failed, a screenshot and a stack trace of the error.
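A minimal sketch of such a hook, assuming a hypothetical TestReporter client and payload field names (our real reporting API is internal, so these names are illustrative only):

```ruby
# Sketch of the 'After' hook described above. TestReporter and the
# payload field names are hypothetical stand-ins for our internal
# reporting API, not the real implementation.
module TestReporter
  def self.reports
    @reports ||= []
  end

  def self.report(payload)
    # The real hook POSTs this payload to the reporting API;
    # here we just collect it so the sketch stays self-contained.
    reports << payload
  end
end

# Build the payload sent for one finished scenario.
def scenario_payload(scenario)
  payload = {
    scenario: scenario.name,
    status:   scenario.failed? ? 'FAIL' : 'PASS'
  }
  if scenario.failed?
    # A screenshot would also be attached here via a Calabash helper.
    payload[:stack_trace] = Array(scenario.exception && scenario.exception.backtrace).join("\n")
  end
  payload
end

# In env.rb the hook itself is then just:
#   After { |scenario| TestReporter.report(scenario_payload(scenario)) }
```

Keeping the payload construction in its own method makes it easy to unit-test without booting Cucumber.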

As a future improvement, we would like to capture memory and CPU usage so we can show trends and highlight when we are suddenly using more memory than normal.

Rerunning Failed Tests

Because we run against a test environment that is also used for backend testing, where services can be restarted or broken at any point, we have added a retry mechanism for failing tests.

In Jenkins, within the same job, the status is checked when the first test run completes. If there are failed tests, a second run is started for only the failed tests; if tests are still failing after that, a third and final run is started.
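The three-attempt flow can be sketched like this (the runner callable is a hypothetical stand-in for the real Calabash/Test Cloud invocation in the Jenkins job script):

```ruby
# Sketch of the Jenkins retry flow described above: up to three runs
# within one job, each rerunning only the scenarios that failed in
# the previous attempt. `runner` is any callable that takes a list of
# scenario names (nil = full suite on the first attempt) and returns
# the names that failed -- a hypothetical stand-in for the real
# Calabash invocation.
def retry_failed_tests(runner, max_attempts: 3)
  failed = nil                 # nil => first attempt runs everything
  max_attempts.times do
    failed = runner.call(failed)
    break if failed.empty?     # stop early once everything passes
  end
  failed                       # scenarios still failing at the end
end
```

A flaky scenario that passes on the second attempt therefore ends the job with an empty failure list, while a genuine failure survives all three runs.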

Example from our Jenkins console log:

2015-03-17 18:49:56 +0100 Status: Testing: 1 running, 0 enqueued, 0 complete…

2015-03-17 18:50:06 +0100 Status: Finished Done!

Total scenarios: 21

20 passed

1 failed

Total steps: 63

Test Report:

Should retry

2015-03-17 18:50:33 +0100 Status: Testing: 0 running, 1 enqueued, 0 complete…

2015-03-17 18:54:42 +0100 Status: Finished Done!

Total scenarios: 1

1 passed

0 failed

Total steps: 4

Test Report:

Should not retry

How does it work?

Cucumber allows us to use the ‘Around’ hook to determine if the test scenario should run or not.

Around do |scenario, block|
  if should_run_scenario?(scenario)
    block.call
  end
end

We call our reporting API and get a JSON response listing the scenarios from the previous attempts within the same Jenkins run, and whether they passed or failed.

should_run_scenario? returns true if it is the first attempt, or if the test failed in the previous attempt within the same Jenkins run.
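That check can be sketched like this, assuming the reporting API returns previous attempts as a JSON array of scenario/status pairs (the field names and shape are assumptions, and the HTTP fetch is left out so the sketch stays self-contained):

```ruby
require 'json'

# Sketch of should_run_scenario?, assuming the reporting API returns
# the previous attempts for this Jenkins run as JSON like:
#   [{"scenario": "Should retry", "status": "FAIL"}, ...]
# The field names are assumed; fetching the JSON is omitted here.
def should_run_scenario?(scenario_name, previous_attempts_json)
  attempts = JSON.parse(previous_attempts_json)
  previous = attempts.select { |a| a['scenario'] == scenario_name }.last
  # Run on the first attempt, or when the most recent attempt failed.
  previous.nil? || previous['status'] == 'FAIL'
end
```

So a scenario with no earlier record runs, one whose last record is FAIL reruns, and one whose last record is PASS is skipped.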


Mobile automation is still quite early in its development, but we have seen vast improvements in the stability and reliability of the tests we run. It is not yet at the level of sophistication of browser testing with Selenium, but it is getting better, and the ability to run on real devices rather than simulators or emulators has increased our stability and coverage.


Cucumber –

Calabash –

Xamarin –
